text | url | dump | lang | source
Mahesh Chand(5)
Sanjay Debnath(4)
Jignesh Trivedi(3)
Asma Khalid(3)
Monica Rathbun(3)
Ravi Shankar(3)
Santhakumar Munuswamy(2)
Mukesh Kumar(2)
Shashangka Shekhar(2)
Suthahar J(2)
Mohammed Rameez Khan(2)
Chris Love(2)
Iqra Ali(2)
Tahir Naushad(2)
Rebai Hamida(2)
Ibrahim Ersoy(2)
Sumit Singh Sisodia(2)
Dennis Thomas(2)
Mangesh Kulkarni(2)
Prashant Kumar(2)
Ankit Sharma(2)
Nirav Daraniya(2)
Nithya Mathan(2)
Ck Nitin(1)
Amit Kumar Singh(1)
N Vinodh(1)
Arthur Le Meur(1)
Piyush Agarwal(1)
Raj Kumar(1)
Gaurav Jain(1)
Mushtaq M A(1)
Shantha Kumar T(1)
Shweta Lodha(1)
Kamlesh Kumar(1)
Mahesh Verma(1)
Lou Troilo(1)
Vijai Anand Ramalingam(1)
Hemant Panchal(1)
Padmalatha Dronamraju(1)
Rahul Saraswat(1)
Pritam Zope(1)
Priyanka Mane(1)
Shiv Sharma(1)
Neel Bhatt(1)
Lakpriya Ganidu(1)
P K Yadav(1)
Thivagar Segar(1)
Jayanthi P(1)
Kailash Chandra Behera(1)
Kantesh Sinha(1)
Gowtham K(1)
Ajith Kumar(1)
Vikas Srivastava(1)
Munish A(1)
Puja Kose(1)
Dhruvin Shah(1)
Ahsan Siddique(1)
Najuma Mahamuth(1)
Jinal Shah(1)
Mohammad Irshad(1)
Sibeesh Venu(1)
Sarath Jayachandran ..
Creating Stop Watch Android Application Tutorial
Mar 08, 2018.
Hello all! In this tutorial, we will learn how to create a Stop Watch Android app, which will have the basic features of a stop watch like Start, Stop, Pause, and Reset.
Getting Started With SharePoint Framework (SPFX)
Mar 08, 2018.
In this article, I have explained how to set up the SharePoint Framework development environment and how to build a SharePoint Framework web part from scratch.
Getting Started With Angular 2 Using Angular CLI
Mar 01, 2018.
In this article, I will demonstrate how to install Angular CLI and how to set up an Angular project and run it.
Kick Start With Azure Cosmos DB
Mar 01, 2018.
In this article, we will discuss Azure Cosmos DB.
Basic Templating Using Node.js And Express
Feb 19, 2018.
Previously, we learned how to simply start up with NodeJS and use its package manager. At the link below, you can get an overview of starting up with NodeJS.
Getting Started With Microsoft Academic Knowledge Using Cognitive Services
Feb 17, 2018.
Microsoft Academic is a free public search engine for academic publications and literature developed by Microsoft Research. This library has 375 million titles, 170 million of which are academic papers.
An Angular 5 Application Gets Started Or Loaded
Feb 09, 2018.
Now, we will try to understand how an Angular application is loaded and gets started...
WPF - File Menu User Control
Feb 08, 2018.
This article is about the development of WPF File Menu User control..
Getting Started With Razor Pages In ASP.NET Core 2.0
Feb 01, 2018.
Today, we will talk more about Razor Pages - what a Razor Page actually is, how to create Razor Page applications, and some of the fundamentals of Razor Pages.
Getting Started With "ng-bootstrap" In Angular 5 App
Jan 29, 2018.
In this article, we are going to cover how to install and set up ng-bootstrap in our Angular 5 apps.
Getting Started With OpenGL Win32
Jan 27, 2018.
To get started with OpenGL using GLUT, read this article.
Xamarin.Forms - Pages
Jan 23, 2018.
In the previous chapter, I explained how you can prepare your environment for Android or iOS application development. In this chapter, I will start presenting the structure of our page in Xamarin.Forms.
How To Start With Node.js
Jan 22, 2018.
In this post, we will learn about NodeJS. This is the startup post for those who are interested in working with NodeJS but are confused about how to start.
Configure Windows Authentication In ASP.NET Core
Jan 11, 2018.
Using Windows Authentication, users are authenticated in an ASP.NET Core application with the help of the operating system. Windows Authentication is very useful in intranet applications where users are in the same domain.
Create Your First Bot Using Visual Studio 2017 - Step By Step Guide
Jan 11, 2018.
Seeing how fast companies are adopting bots, it is really the best time for you to start learning the Bot Framework and start adopting bots for your business.
AI Series - Part One - Registering For Emotion API
Jan 11, 2018.
I will be showing how to start AI development with Cognitive Services.
Getting Started With Azure Service Bus
Dec 26, 2017.
From this article, you will get an overview of Azure Service Bus and learn how to create an Azure Service Bus namespace using the Azure portal.
Understand HTTP.sys Web Server In ASP.NET Core
Dec 19, 2017.
HTTP.sys is a Windows-based web server for ASP.NET Core. It is an alternative to the Kestrel server and has some features that are not supported by Kestrel.
Mounting Azure File Share With Windows
Nov 30, 2017.
This article shows how to create an Azure Storage account and mount an Azure file share with Windows.
Building Windows 10 App Using UWP
Nov 30, 2017.
Build a Windows 10 application that runs anywhere using the Universal Windows Platform.
|
https://www.c-sharpcorner.com/tags/Windows-8-Start-Screen
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
In order to run this recipe, you will need the following packages:
On Windows, you can find binary installers for all those packages except descartes on Chris Gohlke's webpage. ()
Installing descartes is easy:
pip install descartes.
On other systems, you can find installation instructions on the projects' websites. GDAL/OGR is a C++ library that is required by fiona. The other packages are regular Python packages.
Finally, you need to download the Africa dataset on the book's website. ()
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.collections as col
from mpl_toolkits.basemap import Basemap
import fiona
import shapely.geometry as geom
from descartes import PolygonPatch
%matplotlib inline
# natural earth data
countries = fiona.open("data/ne_10m_admin_0_countries.shp")
africa = [c for c in countries if c['properties']['CONTINENT'] == 'Africa']
m = Basemap(llcrnrlon=-23.03, llcrnrlat=-37.72, urcrnrlon=55.20, urcrnrlat=40.58)
def _convert(poly, m):
    if isinstance(poly, list):
        return [_convert(_, m) for _ in poly]
    elif isinstance(poly, tuple):
        return m(*poly)
for _ in africa:
    _['geometry']['coordinates'] = _convert(
        _['geometry']['coordinates'], m)
We now create matplotlib PatchCollection objects from the Shapefile dataset loaded with fiona. We use Shapely and descartes to do this.
def get_patch(shape, **kwargs):
    """Return a matplotlib PatchCollection from a geometry
    object loaded with fiona."""
    # Simple polygon.
    if isinstance(shape, geom.Polygon):
        return col.PatchCollection([PolygonPatch(shape, **kwargs)],
                                   match_original=True)
    # Collection of polygons.
    elif isinstance(shape, geom.MultiPolygon):
        return col.PatchCollection([PolygonPatch(c, **kwargs)
                                    for c in shape],
                                   match_original=True)
def get_patches(shapes, fc=None, ec=None, **kwargs):
    """Return a list of matplotlib PatchCollection objects
    from a Shapefile dataset."""
    # fc and ec are the face and edge colors of the countries.
    # We ensure these are lists of colors, with one element
    # per country.
    if not isinstance(fc, list):
        fc = [fc] * len(shapes)
    if not isinstance(ec, list):
        ec = [ec] * len(shapes)
    # We convert each polygon to a matplotlib PatchCollection
    # object.
    return [get_patch(geom.shape(s['geometry']), fc=fc_, ec=ec_, **kwargs)
            for s, fc_, ec_ in zip(shapes, fc, ec)]
def get_colors(field, cmap):
    """Return one color per country, depending on a specific
    field in the dataset."""
    values = [country['properties'][field] for country in africa]
    values_max = max(values)
    return [cmap(v / values_max) for v in values]
plt.figure(figsize=(8,6));
# Display the countries color-coded with their population.
ax = plt.subplot(121);
m.drawcoastlines();
patches = get_patches(africa, fc=get_colors('POP_EST', plt.cm.Reds), ec='k')
for p in patches:
    ax.add_collection(p)
plt.title("Population");
# Display the countries color-coded with their GDP.
ax = plt.subplot(122);
m.drawcoastlines();
patches = get_patches(africa, fc=get_colors('GDP_MD_EST', plt.cm.Blues), ec='k')
for p in patches:
    ax.add_collection(p)
plt.title("GDP");
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
|
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter14_graphgeo/06_gis.ipynb
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
tornado.platform.twisted — Bridges between Twisted and Tornado
Bridges between the Twisted reactor and Tornado IOLoop.
This module lets you run applications and libraries written for Twisted in a Tornado application. It can be used in two modes, depending on which library’s underlying event loop you want to use.
This module has been tested with Twisted versions 11.0.0 and newer.
Twisted on Tornado

class tornado.platform.twisted.TornadoReactor(io_loop=None)
    Twisted reactor built on the Tornado IOLoop.

TornadoReactor implements the Twisted reactor interface on top of the Tornado IOLoop. To use it, simply call install at the beginning of the application:
import tornado.platform.twisted
tornado.platform.twisted.install()
from twisted.internet import reactor
When the app is ready to start, call IOLoop.current().start() instead of reactor.run().
It is also possible to create a non-global reactor by calling tornado.platform.twisted.TornadoReactor(io_loop). However, if the IOLoop.
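Putting the pieces above together, here is a minimal, hedged sketch (not taken from the Tornado docs themselves) of the described pattern: install the Tornado-backed reactor before anything imports twisted.internet.reactor, schedule some Twisted-style work, then start the IOLoop instead of calling reactor.run().

import tornado.ioloop
import tornado.platform.twisted

tornado.platform.twisted.install()       # must run before the reactor is imported anywhere
from twisted.internet import reactor     # now backed by TornadoReactor

def hello():
    print("called from a Twisted timer running on the Tornado IOLoop")

reactor.callLater(1.0, hello)            # Twisted-style scheduling still works
tornado.ioloop.IOLoop.current().start()  # instead of reactor.run()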
|
http://www.tornadoweb.org/en/branch4.3/twisted.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
idlj [ options ] idlfile
Some earlier releases of the IDL-to-Java compiler were named idltojava.
MyPOATie tie = new MyPOATie(myDelegate);
For the My interface, the bindings are emitted to /altdir/My.java, etc., instead of ./My.java.
#include <Embedded.idl>
By default, the compiler does not operate in verbose mode.
Version information also appears within the bindings generated by the compiler. Any additional options appearing on the command-line are ignored.
#define symbol.
The fixed IDL type is not supported.
|
http://www.linuxhowtos.org/manpages/1/idlj.htm
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Managers

A Manager is the interface through which database query operations are provided to Django models. At least one Manager exists for every model in a Django application.

The way Manager classes work is documented in Making queries; this document specifically touches on model options that customize Manager behavior.
Manager names

By default, Django adds a Manager with the name objects to every Django model class. However, if you want to use objects as a field name, or if you want to use a name other than objects for the Manager, you can rename it on a per-model basis. To rename the Manager for a given class, define a class attribute of type models.Manager() on that model. For example:
from django.db import models

class Person(models.Model):
    #...
    people = models.Manager()
Using this example model, Person.objects will generate an AttributeError exception, but Person.people.all() will provide a list of all Person objects.
Custom Managers

You can use a custom Manager in a particular model by extending the base Manager class and instantiating your custom Manager in your model.

There are two reasons you might want to customize a Manager: to add extra Manager methods, and/or to modify the initial QuerySet the Manager returns.
Adding extra Manager methods

Adding extra Manager methods is the preferred way to add “table-level” functionality to your models. (For “row-level” functionality – i.e., functions that act on a single instance of a model object – use Model methods, not custom Manager methods.)

A custom Manager method can return anything you want. It doesn’t have to return a QuerySet.

For example, this custom Manager offers a method with_counts(), which returns a list of all OpinionPoll objects, each with an extra num_responses attribute that is the result of an aggregate query:
from django.db import models

class PollManager(models.Manager):
    def with_counts(self):
        from django.db import connection
        cursor = connection.cursor()
        cursor.execute("""
            SELECT p.id, p.question, p.poll_date, COUNT(*)
            FROM polls_opinionpoll p, polls_response r
            WHERE p.id = r.poll_id
            GROUP BY p.id, p.question, p.poll_date
            ORDER BY p.poll_date DESC""")
        result_list = []
        for row in cursor.fetchall():
            p = self.model(id=row[0], question=row[1], poll_date=row[2])
            p.num_responses = row[3]
            result_list.append(p)
        return result_list

class OpinionPoll(models.Model):
    question = models.CharField(max_length=200)
    poll_date = models.DateField()
    objects = PollManager()

class Response(models.Model):
    poll = models.ForeignKey(OpinionPoll)
    person_name = models.CharField(max_length=50)
    response = models.TextField()
With this example, you’d use OpinionPoll.objects.with_counts() to return that list of OpinionPoll objects with num_responses attributes.

Another thing to note about this example is that Manager methods can access self.model to get the model class to which they’re attached.
Modifying initial Manager QuerySets

A Manager’s base QuerySet returns all objects in the system. For example, using this model:
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=100)
    author = models.CharField(max_length=50)
…the statement Book.objects.all() will return all books in the database.

You can override a Manager’s base QuerySet by overriding the Manager.get_queryset() method. get_queryset() should return a QuerySet with the properties you require.

For example, the following model has two Managers – one that returns all objects, and one that returns only the books by Roald Dahl:
# First, define the Manager subclass.
class DahlBookManager(models.Manager):
    def get_queryset(self):
        return super(DahlBookManager, self).get_queryset().filter(author='Roald Dahl')

# Then hook it into the Book model explicitly.
class Book(models.Model):
    title = models.CharField(max_length=100)
    author = models.CharField(max_length=50)

    objects = models.Manager()        # The default manager.
    dahl_objects = DahlBookManager()  # The Dahl-specific manager.
With this sample model, Book.objects.all() will return all books in the database, but Book.dahl_objects.all() will only return the ones written by Roald Dahl.

Of course, because get_queryset() returns a QuerySet object, you can use filter(), exclude() and all the other QuerySet methods on it. So these statements are all legal:
Book.dahl_objects.all()
Book.dahl_objects.filter(title='Matilda')
Book.dahl_objects.count()
This example also pointed out another interesting technique: using multiple managers on the same model. You can attach as many Manager() instances to a model as you’d like. This is an easy way to define common “filters” for your models.

For example:
class AuthorManager(models.Manager):
    def get_queryset(self):
        return super(AuthorManager, self).get_queryset().filter(role='A')

class EditorManager(models.Manager):
    def get_queryset(self):
        return super(EditorManager, self).get_queryset().filter(role='E')

class Person(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    role = models.CharField(max_length=1, choices=(('A', _('Author')), ('E', _('Editor'))))
    people = models.Manager()
    authors = AuthorManager()
    editors = EditorManager()
This example allows you to request Person.authors.all(), Person.editors.all(), and Person.people.all(), yielding predictable results.
Default managers

If you use custom Manager objects, take note that the first Manager Django encounters (in the order in which they’re defined in the model) has a special status. Django interprets the first Manager defined in a class as the “default” Manager, and several parts of Django (including dumpdata) will use that Manager exclusively for that model. As a result, it’s a good idea to be careful in your choice of default manager in order to avoid a situation where overriding get_queryset() results in an inability to retrieve objects you’d like to work with.
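To illustrate that warning, here is a minimal sketch (the model and manager names are made up, not from the Django docs) where a filtering manager is declared first and therefore becomes the default that dumpdata and other machinery will use:

from django.db import models

class PublishedManager(models.Manager):
    def get_queryset(self):
        return super(PublishedManager, self).get_queryset().filter(published=True)

class Article(models.Model):
    title = models.CharField(max_length=100)
    published = models.BooleanField(default=False)

    published_only = PublishedManager()  # declared first, so this is the default manager
    objects = models.Manager()           # unfiltered, but not the default here

With this ordering, dumpdata would only ever see published articles; declaring the plain objects = models.Manager() first avoids the problem.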
Calling custom QuerySet methods from the Manager

While most methods from the standard QuerySet are accessible directly from the Manager, this is only the case for the extra methods defined on a custom QuerySet if you also implement them on the Manager:
class PersonQuerySet(models.QuerySet):
    def authors(self):
        return self.filter(role='A')

    def editors(self):
        return self.filter(role='E')

class PersonManager(models.Manager):
    def get_queryset(self):
        return PersonQuerySet(self.model, using=self._db)

    def authors(self):
        return self.get_queryset().authors()

    def editors(self):
        return self.get_queryset().editors()

class Person(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    role = models.CharField(max_length=1, choices=(('A', _('Author')), ('E', _('Editor'))))
    people = PersonManager()
This example allows you to call both authors() and editors() directly from the manager Person.people.
Creating Manager with QuerySet methods

In lieu of the above approach which requires duplicating methods on both the QuerySet and the Manager, QuerySet.as_manager() can be used to create an instance of Manager with a copy of a custom QuerySet’s methods:
class Person(models.Model):
    ...
    people = PersonQuerySet.as_manager()
The Manager instance created by QuerySet.as_manager() will be virtually identical to the PersonManager from the previous example.

Not every QuerySet method makes sense at the Manager level; for instance we intentionally prevent the QuerySet.delete() method from being copied onto the Manager class.
Methods are copied according to the following rules:
- Public methods are copied by default.
- Private methods (starting with an underscore) are not copied by default.
- Methods with a queryset_only attribute set to False are always copied.
- Methods with a queryset_only attribute set to True are never copied.
For example:
class CustomQuerySet(models.QuerySet):
    # Available on both Manager and QuerySet.
    def public_method(self):
        return

    # Available only on QuerySet.
    def _private_method(self):
        return

    # Available only on QuerySet.
    def opted_out_public_method(self):
        return
    opted_out_public_method.queryset_only = True

    # Available on both Manager and QuerySet.
    def _opted_in_private_method(self):
        return
    _opted_in_private_method.queryset_only = False
from_queryset

For advanced usage you might want both a custom Manager and a custom QuerySet. You can do that by calling Manager.from_queryset() which returns a subclass of your base Manager with a copy of the custom QuerySet methods:
class BaseManager(models.Manager):
    def manager_only_method(self):
        return

class CustomQuerySet(models.QuerySet):
    def manager_and_queryset_method(self):
        return

class MyModel(models.Model):
    objects = BaseManager.from_queryset(CustomQuerySet)()
You may also store the generated class into a variable:
CustomManager = BaseManager.from_queryset(CustomQuerySet)

class MyModel(models.Model):
    objects = CustomManager()

Suppose you also have an abstract base class that declares this custom manager as its default:

class AbstractBase(models.Model):
    # ...
    objects = CustomManager()

    class Meta:
        abstract = True
If you use this directly in a subclass, objects will be the default manager if you declare no managers in the base class:
class ChildA(AbstractBase):
    # ...
    # This class has CustomManager as the default manager.
    pass
If you want to inherit from AbstractBase, but provide a different default manager, you can provide the default manager on the child class:
class ChildB(AbstractBase):
    # ...
    # An explicit default manager.
    default_manager = OtherManager()
Here, default_manager is the default. The objects manager is still available, since it’s inherited. It just isn’t used as the default.

Finally for this example, suppose you want to add extra managers to the child class, but still use the default from AbstractBase. You can’t add the new manager directly in the child class, as that would override the default and you would have to also explicitly include all the managers from the abstract base class. The solution is to put the extra managers in another base class and introduce it into the inheritance hierarchy after the defaults:
class ExtraManager(models.Model):
    extra_manager = OtherManager()

    class Meta:
        abstract = True

class ChildC(AbstractBase, ExtraManager):
    # ...
    # Default manager is CustomManager, but OtherManager is
    # also available via the "extra_manager" attribute.
    pass
Note that while you can define a custom manager on the abstract model, you can’t invoke any methods using the abstract model. That is:
ClassA.objects.do_something()
is legal, but:
AbstractBase.objects.do_something()
will raise an exception. This is because managers are intended to encapsulate logic for managing collections of objects. Since you can’t have a collection of abstract objects, it doesn’t make sense to be managing them. If you have functionality that applies to the abstract model, you should put that functionality in a staticmethod or classmethod on the abstract model.
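A minimal sketch of that advice (the class and method names are illustrative only): shared behaviour lives on the abstract model as a classmethod, and each concrete subclass calls it through its own manager.

from django.db import models

class AbstractWidget(models.Model):
    name = models.CharField(max_length=50)

    class Meta:
        abstract = True

    @classmethod
    def starting_with(cls, prefix):
        # cls is the concrete subclass when called as ChildWidget.starting_with('A'),
        # so cls.objects is a real manager backed by a real table.
        return cls.objects.filter(name__startswith=prefix)

class ChildWidget(AbstractWidget):
    pass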
Implementation concerns

Whatever features you add to your custom Manager, it must be possible to make a shallow copy of a Manager instance; i.e., the following code must work:
>>> import copy
>>> manager = MyManager()
>>> my_copy = copy.copy(manager)
Django makes shallow copies of manager objects during certain queries; if your Manager cannot be copied, those queries will fail.
This won’t be an issue for most custom managers. If you are just adding simple methods to your Manager, it is unlikely that you will inadvertently make instances of your Manager uncopyable.

However, if you’re overriding __getattr__ or some other private method of your Manager object that controls object state, you should ensure that you don’t affect the ability of your Manager to be copied.

Also remember that if a manager used as an automatic manager overrides the get_queryset() method and filters out any rows, Django will return incorrect results. Don’t do that. A manager that filters results in get_queryset() is not appropriate for use as an automatic manager.
|
https://docs.djangoproject.com/en/1.8/topics/db/managers/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
import.javax.swing.*;
A newbie could stare at this for hours before finally asking for help, here, at the JavaRanch.com. Immediately, they would get the answer that there is a dot that is out of place.
import javax.swing.*;
Giving help such as this does not break the "Do your own homework" rule and it helps the student.
There was another question that I helped someone with. It was: Why won't my code compile?!
public static void main(String[] arguments);
This is another one that can stump a new student, who thinks that you terminate each line with a semicolon. An online forum such as JavaRanch.com is a great place for the student to quickly get their typo corrected to the following, which will compile:
public static void main(String[] arguments) {
Sometimes we have given them hints and clues, trying to head them in the right direction, but they still don't get it.
We can all give them hints and try to send them in the right direction because we don't want to just write the code for them. It would be easy for us to do, but the students don't learn much this way. Sometimes the student gets more confused with several people answering the question, some answers being more right than others, some users using simple language, and others using language that is correct but that the student doesn't understand.
What I propose is that, for students who are totally lost, we have a new forum called "Java Mentoring". Experienced Java programmers could be matched with students who are totally lost and need a lot of help, students who are clueless even after you try to give them some clues.
I suggest that students apply for a mentor and that JavaRanch matches a student to a mentor for interactive, one-on-one mentoring.
Text chat is more interactive than email.
Audio or audio-video chat is more interactive than text chat.
We could even use screen sharing so that students and mentors can see each other's screens. Sometimes, new learners don't have the vocabulary or don't use words correctly. Our "lingua franca" would be the Java code, and it would help if you could see their actual screen.
As a mentor, I want to quit doing people's homework and take a 'hands-off' approach that gives my students the advantage of learning with a 'hands-on' approach.
I want to talk them through doing their own homework. I may even do the exercise at the same time, but not just send them my solution (until they have their own solution). By doing the homework myself and keeping my solution to myself, I can talk the student through doing their own homework and make it a true learning experience for them.
So, to the Trailmaster, the Sheriffs, and the Bartenders, I'm asking the question: "Can we create a new forum to match students who are totally lost with a capable mentor who won't do their homework, but will interactively guide them through doing their own homework?"
Kaydell
[ March 09, 2008: Message edited by: Kaydell Leavitt ]
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
People get lost, but with a guide and one-on-one mentoring people who would otherwise fail, can succeed.
Kaydell
Originally posted by Kaydell Leavitt:. . .
When a conversation reaches that point, I tell the individual that they should seek help from their teacher/instructor/professor (if they are not in a structured class, the Cattle Drive is an option). The teacher is better positioned to give them the correct amount of direction as opposed to just giving answers away, since they know what's been covered in class and what the object of the lesson is.
What's more is the teacher needs the feedback so they can correctly judge the pace of the class.
The public nature of the discussions here means that everyone benefits from questions asked. These forums are searchable, so a great number of the people who receive help here are never seen. You lose that in a chatroom or email conversation. Additionally, there are more eyes on the question AND the answers given. If a wrong or outdated answer is given in one of the forums, it usually doesn't take long before someone more experienced chimes in with a more correct answer. In cases like this, both the person asking the question and the person who gave the initial (but wrong) answer are helped, and anyone searching the forums will be able to read the whole conversation and benefit from it.
Most of us don't have the time to take on personal proteges.
If this is something you're interested in doing, you might consider posting to the Jobs Wanted or our Blatant Advertising forum.
As Marilyn mentioned, our Cattle Drive comes very close to this and, in my opinion, is a better solution.
If you get a minute, you might want to look it over.
[ March 10, 2008: Message edited by: Ben Souther ]
Most of us don't have the time to take on personal proteges.
If this is something you're interested in doing, you might consider posting to the Jobs Wanted or our Blatant Advertising forum.
This is not Blatant Advertising or a Job Wanted. I'm retired and I want to help out students for free because it keeps me busy and I think that I can help students who get stumped.
This is a great forum. But I still think that there is a place to be more interactive, which I can do on my own. I just thought that other people might be willing to work one-on-one too, and that the JavaRanch.com could be the placed to "Get Connected".
When a conversation reaches that point, I tell the individual that they should seek help from their teacher/instructor/professor
My best student has selected to do his semester project to implement his desktop app in Swing. His instructor says that she doesn't know anything about Swing.
My worst student is stressing out because this is her last semester and if she fails Java, she won't graduate.
I think that we have to keep in mind that instructors have a high number of students and that the instructors *can't* give the students who need one-on-one attention the attention that they need.
When I was a student, there was University Tutorial Services because some people need that one-on-one attention.
Kaydell
[ March 10, 2008: Message edited by: Kaydell Leavitt ]
2) I think that if you experience somebody thrashing and you really want something a bit more interactive, give them your personal contact info and say "let's chat".
|
https://coderanch.com/t/3391/Helping-people-totally-lost
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Created on 2013-08-26 17:23 by aisaac, last changed 2016-10-14 05:20 by python-dev. This issue is now closed.
The need for weighted random choices is so common that it is addressed as a "common task" in the docs:
This enhancement request is to add an optional argument to random.choice, which must be a sequence of non-negative numbers (the weights) having the same length as the main argument.
+1. I've found myself in need of this feature often enough to wonder why it's not part of the stdlib.
Agreed with the feature request. The itertools dance won't be easy to understand, for many people.
I realize its probably quite early to begin putting a patch together, but here's some preliminary code for anyone interested. It builds off of the "common task" example in the docs and adds in validation for the weights list.
There are a few design decisions I'd like to hash out.
In particular:
- Should negative weights cause a ValueError to be raised, or should they be converted to 0s?
- Should passing a list full of zeros as the weights arg raise a ValueError or be treated as if no weights arg was passed?
[Madison May]
> - Should negative weights cause a ValueError to be raised, or should they be converted to 0s?
> - Should passing a list full of zeros as the weights arg raise a ValueError or be treated as if no weights arg was passed?
Both those seem like clear error conditions to me, though I think it would be fine if the second condition produced a ZeroDivisionError rather than a ValueError.
I'm not 100% sold on the feature request. For one thing, the direct implementation is going to be inefficient for repeated sampling, building the table of cumulative sums each time random.choice is called. A more efficient approach for many use-cases would do the precomputation once, returning some kind of 'distribution' object from which samples can be generated. (Walker's aliasing method is one route for doing this efficiently, though there are others.) I agree that this is a commonly needed and commonly requested operation; I'm just not convinced either that an efficient implementation fits well into the random module, or that it makes sense to add an inefficient implementation.
[Mark Dickinson]
> Both those seem like clear error conditions to me, though I think it would be fine if the second condition produced a ZeroDivisionError rather than a ValueError.
Yeah, in hindsight it makes sense that both of those conditions should raise errors. After all: "Explicit is better than implicit".
As far as optimization goes, could we potentially use functools.lru_cache to cache the cumulative distribution produced by the weights argument and optimize repeated sampling?
Without @lru_cache:
>>> timeit.timeit("x = choice(list(range(100)), list(range(100)))", setup="from random import choice", number=100000)
36.7109281539997
With @lru_cache(max=128):
>>> timeit.timeit("x = choice(list(range(100)), list(range(100)))", setup="from random import choice", number=100000)
6.6788657720007905
Of course it's a contrived example, but you get the idea.
Walker's aliasing method looks intriguing. I'll have to give it a closer look.
I agree that an efficient implementation would be preferable but would feel out of place in random because of the return type. I still believe a relatively inefficient addition to random.choice would be valuable, though.
+1 for the overall idea. I'll take a detailed look at the patch when I get a chance.
The sticking point is going to be that we don't want to recompute the cumulative weights for every call to weighted_choice.
So there should probably be two functions:
cw = make_cumulate_weights(weight_list)
x = choice(choice_list, cw)
This is similar to what was done with string.maketrans() and str.translate().
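As an illustration, here is a rough sketch of that two-step split (the function names below are hypothetical, not the API that was eventually adopted): the precomputation is paid once, and each subsequent draw is a single bisection.

import bisect
import itertools
import random

def make_cumulative_weights(weights):
    # The O(n) precomputation, done once per weight list.
    return list(itertools.accumulate(weights))

def weighted_choice(population, cum_weights):
    # Each draw is then a single O(log n) bisection.
    x = random.random() * cum_weights[-1]
    return population[bisect.bisect(cum_weights, x)]

cw = make_cumulative_weights([18, 18, 2])
sample = [weighted_choice(['red', 'black', 'green'], cw) for _ in range(10)]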
> A more efficient approach for many use-cases would do the precomputation once, returning some kind of 'distribution' object from which samples can be generated.
I like the idea about adding a family of distribution generators. They should check input parameters and make a precomputation, and then generate an infinite sequence of specially distributed random numbers.
[Raymond Hettinger]
> The sticking point is going to be that we don't want to recompute the
> cumulative weights for every call to weighted_choice.
> So there should probably be two functions:
> cw = make_cumulate_weights(weight_list)
> x = choice(choice_list, cw)
That's pretty much how I broke things up when I decided to test out optimization with lru_cache. That version of the patch is now attached.
[Serhiy Storchaka]
> I like the idea about adding a family of distribution generators.
> They should check input parameters and make a precomputation, and then generate an infinite sequence of specially distributed random numbers.
Would these distribution generators be implemented internally (see attached patch) or publicly exposed?
> Would these distribution generators be implemented internally (see attached patch) or publicly exposed?
See issue18900. Even if that proposition is rejected, I think we should publicly expose a weighted choice_generator(). A generator, or a builder which returns a function, are the only ways to implement this feature efficiently. Using lru_cache isn't good because several choice generators can be used in one program, and because it leaves large data in the cache long after it was used.
> Use lru_cache isn't good because several choice generators can be used in a program and because it left large data in a cache long time after it was used.
Yeah, I just did a quick search of the stdlib and only found one instance of lru_cache in use -- another sign that lru_cache is a bad choice.
> I like the idea about adding a family of distribution generators
Let's stay focused on the OP's feature request for a weighted version of choice().
For the most part, it's not a good idea to "just add" a family of anything to the standard library. We wait for user requests and use cases to guide the design and error on the side of less, rather than more. This helps avoid bloat. Also, it would be a good idea to start something like this as a third-party to module to let it iterate and mature before deciding whether there was sufficient user uptake to warrant inclusion in the standard library.
For the current request, we should also do some research on existing solutions in other languages. This isn't new territory. What do R, SciPy, Fortran, Matlab or other statistical packages already do? Their experiences can be used to inform our design. Alan Kay's big criticism of Python developers is that they have a strong propensity invent from scratch rather than taking advantage of the mountain of work done by the developers who came before them.
> What do R, SciPy, Fortran, Matlab or other statistical packages already do?
Numpy avoids recalculating the cumulative distribution by introducing a 'size' argument to numpy.random.choice(). The cumulative distribution is calculated once, then 'size' random choices are generated and returned.
Their overall implementation is quite similar to the method suggested in the python docs.
>>> choices, weights = zip(*weighted_choices)
>>> cumdist = list(itertools.accumulate(weights))
>>> x = random.random() * cumdist[-1]
>>> choices[bisect.bisect(cumdist, x)]
The addition of a 'size' argument to random.choice() has already been discussed (and rejected) in Issue18414, but this was on the grounds that the standard idiom for generating a list of random choices ([random.choice(seq) for i in range(k)]) is obvious and efficient.
Honestly, I think adding weights to any of the random functions is trivial enough to implement as is. Just because something becomes a common task does not mean it ought to be added to the stdlib.
Anyway, from a user point of view, I think it'd be useful to be able to send a sequence to a function that'll weight the sequence for use by random.
Just ran across a great blog post on the topic of weighted random generation from Eli Bendersky for anyone interested:
The proposed patch add two methods to the Random class and two module level functions: weighted_choice() and weighted_choice_generator().
weighted_choice(data) accepts either mapping or sequence and returns a key or index x with probability which is proportional to data[x].
If you need several elements with the same distribution, use weighted_choice_generator(data), which returns an iterator that produces random keys or indices of the data. It is faster than calling weighted_choice(data) repeatedly and is more flexible than generating a list of random values of a specified size (as in NumPy).
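A simplified sketch of that generator-based shape (this is not the actual patch, just the general idea): the cumulative sums are built once, and each subsequent value costs only one random float plus one bisection.

import bisect
import itertools
import random

def weighted_choice_generator(data):
    # Accept either a mapping (yield keys) or a sequence of weights (yield indices).
    if hasattr(data, 'keys'):
        keys, weights = list(data.keys()), list(data.values())
    else:
        keys, weights = list(range(len(data))), list(data)
    cumdist = list(itertools.accumulate(weights))
    total = cumdist[-1]
    while True:
        yield keys[bisect.bisect(cumdist, random.random() * total)]

gen = weighted_choice_generator({'hit': 5, 'miss': 1})
sample = [next(gen) for _ in range(10)]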
Should this really be implemented using the cumulative distribution and binary search algorithm? Vose's Alias Method has the same initialization and memory usage cost (O(n)), but is constant time to generate each sample.
An excellent tutorial is here:
Thank you Neil. It is interesting.
Vose's alias method has followed disadvantages (in comparison with the roulette wheel selection proposed above):
1. It operates with probabilities and uses floats, therefore it can be a little less accurate.
2. It consumes two random numbers (an integer and a float) to generate one sample. This can be fixed, however (at the cost of additional precision loss).
3. While it has the same O(n) time and memory cost for initialization, the constant factor is larger: Vose's alias method requires several times more time and memory for initialization.
4. It requires more memory while generating samples.
However it has an advantage. It really has constant time cost to generate each sample.
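For reference, here is a hedged sketch of Vose's alias method (it is not the benchmark code used below): setup is O(n), and each draw afterwards costs one random index plus one biased coin flip, independent of the distribution size.

import random

def build_alias_table(weights):
    # O(n) setup: scale the weights so their mean is 1.0, then pair each
    # "small" slot with a "large" alias that absorbs its missing probability.
    n = len(weights)
    total = float(sum(weights))
    prob = [w * n / total for w in weights]
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for leftovers in (small, large):
        while leftovers:
            prob[leftovers.pop()] = 1.0  # numerically these are ~1.0 already
    return prob, alias

def alias_draw(prob, alias):
    # O(1) per draw: one random slot plus one biased coin flip.
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([18, 18, 2])
sample = [alias_draw(prob, alias) for _ in range(10)]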
Here are some benchmark results. "Roulette Wheel" is the implementation proposed above. "Roulette Wheel 2" is a modification of it with normalized cumulative sums. It has twice the initialization time, but generates each sample 1.5-2x faster. "Vose's Alias" is an implementation of Vose's alias method translated directly from Java. "Vose's Alias 2" is an optimized implementation which takes advantage of Python specifics.
The second column is the size of the distribution, the third column is the initialization time (in milliseconds), the fourth column is the time to generate each sample (in microseconds), and the fifth column is the number of generated samples after which this method overtakes "Roulette Wheel" (including initialization time).
Roulette Wheel 10 0.059 7.165 0
Roulette Wheel 2 10 0.076 4.105 5
Vose's Alias 10 0.129 13.206 -
Vose's Alias 2 10 0.105 6.501 69
Roulette Wheel 100 0.128 8.651 0
Roulette Wheel 2 100 0.198 4.630 17
Vose's Alias 100 0.691 12.839 -
Vose's Alias 2 100 0.441 6.547 148
Roulette Wheel 1000 0.719 10.949 0
Roulette Wheel 2 1000 1.458 5.177 128
Vose's Alias 1000 6.614 13.052 -
Vose's Alias 2 1000 3.704 6.531 675
Roulette Wheel 10000 7.495 13.249 0
Roulette Wheel 2 10000 14.961 6.051 1037
Vose's Alias 10000 69.937 13.830 -
Vose's Alias 2 10000 37.017 6.746 4539
Roulette Wheel 100000 73.988 16.180 0
Roulette Wheel 2 100000 148.176 8.182 9275
Vose's Alias 100000 690.099 13.808 259716
Vose's Alias 2 100000 391.367 7.095 34932
Roulette Wheel 1000000 743.415 19.493 0
Roulette Wheel 2 1000000 1505.409 8.930 72138
Vose's Alias 1000000 7017.669 13.798 1101673
Vose's Alias 2 1000000 4044.746 7.152 267507
As you can see, Vose's alias method has a very large initialization time. The non-optimized version will never overtake "Roulette Wheel" with small distributions (<100000), and even the optimized version needs thousands of samples to do so. Only with very large distributions does Vose's alias method have an advantage (when you need a very large number of samples).
Because generating only one sample calls for the method with the fastest initialization, we need the "Roulette Wheel" implementation. And because large distributions are rare, I think there is no need for an alternative implementation. In the worst case, for generating 1000000 samples from a 1000000-element distribution, the difference between "Roulette Wheel" and "Vose's Alias 2" is the difference between 20 and 11 seconds.
Serhiy, from a technical standpoint, your latest patch looks like a solid solution. From a module design standpoint, we still have a few options to think through, though. What if random.weighted_choice_generator were moved to random.choice_generator and refactored to take an array of weights as an optional argument? Likewise, random.weighted_choice could still be implemented with an optional arg to random.choice. Here are the pros and cons of each implementation as I see them.
Implementation: weighted_choice_generator + weighted_choice
Pros:
Distinct functions help indicate that weighted_choice should be used in a different manner than choice -- [weighted_choice(x) for _ in range(n)] isn't efficient.
Can take Mapping or Sequence as argument.
Has a single parameter
Cons:
Key, not value, is returned
Requires two new functions
Dissimilar to random.choice
Long function name (weighted_choice_generator)
Implementation: choice_generator + optional arg to choice
Pros:
Builds on existing code layout
Value returned directly
Only a single new function required
More compact function name
Cons:
Difficult to support Mappings
Two args required for choice_generator and random.choice
Users may use [choice(x, weights) for _ in range(n)] expecting efficient results
I think Storchaka's solution is more transparent and I agree with him on the point that the choice generator should be exposed.
> I think Storchaka's solution is more transparent and I agree with him on the point that the choice generator should be exposed.
Valid point -- transparency should be priority #1
Most existing implementations produce just an index. That is why weighted_choice() accepts a plain weights list and returns an index. On the other hand, I think working with a mapping will be a wanted feature too (especially because Counter is in the stdlib). Indexable sequences and mappings are similar. In both cases weighted_choice() returns a value which can be used as an index/key into the input argument.
If you need to choose an element from some sequence, just use seq[weighted_choice(weights)]. Actually weighted_choice() shares no code with choice() and has quite different use cases. They should be kept as dissimilar as possible. Perhaps we should even avoid the "choice" part in the function names (are there any ideas?) to accentuate this.
You have me convinced, Serhiy. I see the value in making the two functions distinct.
For naming purposes, perhaps weighted_index() would be more descriptive.
Closed issue 22048 as a duplicate of this one.
Raymond, what is your opinion?
I don't want to speak for Raymond, but the proposed API looks good, and it seems "Roulette Wheel 2" should be the implementation choice given its characteristics (simple, reasonably good and balanced performance).
"Roulette Wheel 2" has twice slower initializations than "Roulette Wheel", but then generates every new item twice faster.
It is possible to implement hybrid generator, which yields first item using "Roulette Wheel", and then rescales cumulative_dist and continues with "Roulette Wheel 2". It will be so fast as "Roulette Wheel" for generating only one item and so fast as "Roulette Wheel 2" for generating multiple items.
The setup cost of RW2 should always be within a small constant multiplier of RW's, so I'm not sure it's worth the hassle to complicate things. But it's your patch :)
The non-generator weighted_choice() function is intended to produce exactly one item. That is the use case for such an optimization.
Updated patch. Synchronized with tip and added optimizations.
I'm adverse to adding the generator magic and the level of complexity in this patch. Please aim for the approach I outlined above (one function to build cumulative weights and another function to choose the value).
Since this isn't a new problem, please take a look at how other languages and packages have solved the problem.
Other languages have no feature as handy as generators. NumPy provides the size parameter to all functions and generates a bunch of random numbers at a time. This doesn't look Pythonic (the Python stdlib prefers iterators).
I believe a generator is the most Pythonic and handiest solution to this issue in Python. And it is efficient enough.
I agree with Serhiy. There is nothing "magic" about generators in Python. Also, the concept of an infinite stream of random numbers (or random whatevers) is perfectly common (/dev/urandom being an obvious example); it is not a notion we are inventing.
By contrast, the two-function approach only makes things clumsier for people since they have to remember to combine them.
When I get a chance, I'll work up an approach that is consistent with the rest of the module in terms of implementation, restartability, and API.
Reopening this idea, but removing the generator from weighted choice.
This is the entire weighted choice function: I removed the generator and replaced it by adding an optional argument to specify the number of values you want the function to return.
Thanks for the patch. Can I get you to fill out a contributor agreement?
What is wrong with generators?
Hello rhettinger. I filled out the form; thanks for letting me know about it. Is there anything else I have to do?
Hey serhiy.storchaka
There were several things "wrong" with the previous implementation in my opinion.
First, they tried to add too much, which, if allowed, would clutter up the random library if every function had both its implementation and an accompanying generator. The other problem is that both were made callable from the public API. I would prefer the generator, if present, to be hidden; it would also have to be more sophisticated to be able to check whether it was being called with new input.
Second, adding the generator to the public API of the random library makes it far more confusing and obfuscates the true purpose of this function, which is to get a weighted choice.
So basically there is nothing wrong with generators, but they don't necessarily belong here, so I removed the generator to try to get back to the core principle of what the function should be doing, by making it simpler.
I disagree.
My patch adds two functions because they serve two different purposes. weighted_choice() returns one random value, like other functions in the random module. weighted_choice_generator() provides a more efficient way to generate random values, since the startup cost is more significant than for other random value generators. Generators are widely used in Python, especially in Python 3. If they are considered confusing, we should deprecate the builtins map(), filter(), zip() and the itertools module in the first place.
Your function, Steven, returns a list containing one random value by default. It does not match the interface of other functions in the random module; it matches the interface of NumPy's random module. In Python you need two separate functions, one that returns a single value and another that returns a list of values. But returning an iterator and generating values on demand is preferable in Python 3. Generators are more flexible. With weighted_choice_generator() it is easy to get the result of your function: list(islice(weighted_choice_generator(data), amount)). But generating a dynamic number of values with your interface is impossible.
Raymond, if you have now free time, could you please make a review of weighted_choice_generator_2.patch?
Hey serhiy.storchaka
I can edit the code to output just one value if called with simply a list and then return a list of values if called with the optional amount parameter. My code also needs to check that amount >= 1.
My code was mostly just to restart this discussion as I personally like the idea of the function for weighted choice and would like it to be standard in the random library.
I have no qualms with adding both weighted_choice and weighted_choice_generator but my concern is mostly that you are asking too much and it won't go through by trying to add two functions at the same time. The other thing is that I believe that weighted_choice could suffice with just one function call.
My last concern is that generators are different from the other functions in random.py. Whereas they are intuitive and accepted in builtins like map and zip, there aren't any other functions in the random library that return that type of object when called; they instead return a numerical result.
Those are my concerns and hence why I rewrote the code.
A user can use map(), filter(), zip() without knowing anything about generators. In most cases those functions will do their magic and provide a finite number of outputs.
The weighted_choice_generator on the other hand isn't as easy to use. If the user wants 5 values from it, they need to know about `take()` from itertools or call `next()`.
I still like Serhiy's implementation more. A function that returns a list instead of the item is unnatural and doesn't fit with the rest of the module.
I think there's need to be some discussion about use cases. What do users actually want? Maybe post this on the ideas list.
Okay.
I reuploaded the file. The spacing on the if amount < 1 was off. Hopefully it's fixed now.
> One to make it return a single number if amount == 1 and the other to check that the amount > 1.
I think that's a dangerous API. Any code making a call to "weighted_choice(..., amount=n)" for variable n now has to be prepared to deal with two possible result types. It would be easy to introduce buggy code that fails in the corner case n = 1.
> One to make it return a single number if amount == 1 and the other to check that the amount > 1.
Suggestion: if you want to go that way, return a single number if `amount` is not provided (so make the default value for `amount` None rather than 1). If `amount=1` is explicitly given, a list containing one item should be returned.
I also think there's no reason to raise an exception when `amount = 0`: just return an empty list.
For comparison, here's NumPy's "uniform" generator, which generates a scalar if the "size" parameter is not given, and an array if "size" is given, even if it's 1.
>>> np.random.uniform()
0.4964992470265117
>>> np.random.uniform(size=1)
array([ 0.64817717])
>>> np.random.uniform(size=0)
array([], dtype=float64)
> Suggestion: if you want to go that way, return a single number if `amount` is not provided (so make the default value for `amount` None rather than 1). If `amount=1` is explicitly given, a list containing one item should be returned.
+1
Re-implemented with suggested improvements taken into account. Thanks @mark.dickinson and @pitrou for the suggestions.
I also removed the redundant "fast path" portion for this code since it doesn't deal with generators anyways.
Let me know additional thoughts about it.
Left in a line of code that was supposed to be removed. Fixed.
Raymond, do you have a time for this issue?
Raymond, any chance to get weighted random choices generator in 3.6? Less than month is left to feature code freeze.
FWIW, I have four full days set aside for the upcoming pre-feature release sprint which is dedicated to taking time to thoughtfully evaluate pending feature requests. In the meantime, I'm contacting Alan Downey for a consultation for the best API for this. As mentioned previously, the generator version isn't compatible with the design of the rest of the module that allows streams to have their state saved and restored at arbitrary points in the sequence. One API would be to create a list all at once (like random.sample does). Another would be to have two steps (like str.maketrans and str.translate). Ideally, the API should integrate neatly with collections.Counter as a possible input for the weighting. Hopefully, Alan can also comment on the relative frequency of small integer weightings versus the general case (the former benefits from a design using random.choice() applied to Counter.elements() and the latter benefits from a design with accumulate() and bisect()). Note, this is a low priority feature (no real demonstrated need, there is already a recipe for it in the docs, and once the best API have been determined, the code is so simple that any of us could implement it in only a few minutes).
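The small-integer-weight fast path mentioned there can be sketched in a few lines (the weights below are illustrative, not from any patch): expand the population in proportion to its weights and reuse the existing random.choice().

from collections import Counter
from random import choice

weights = Counter(red=18, black=18, green=2)  # small integer weights
expanded = list(weights.elements())           # each key repeated 'weight' times
spins = [choice(expanded) for _ in range(10)]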
Latest draft patch attached (w/o tests or docs).
Incorporates consultation from Alan Downey and Jake Vanderplas.
* Population and weights are separate arguments (like numpy.random.choice() and sample() in R). Matches the way data would arrive in Pandas. Easily extracted from a Counter or dict using keys() and values(). Suitable for applications that sample the population multiple times but using different weights. See and
* Excludes a replacement=False option. That use case necessarily has integer weights and may be better suited to the existing random.sample() rather than trying to recompute a CDF on every iteration as we would have to in this function.
* Allows cumulative_weights to be submitted instead of individual weights. This supports use cases where the CDF already exists (as in the ThinkBayes examples) and where we want to periodically reuse the same CDF for repeated samples of the same population -- this occurs in resampling applications, Gibbs sampling, and Monte Carlo Markov Chain applications. Per Jake, "MCMC/Gibbs Sampling approaches generally boil down to a simple weighted coin toss at each step" and "It's definitely common to do aggregation of multiple samples, e.g. to compute sample statistics"
* The API allows the weights to be integers, fractions, decimals, or floats. Likewise, the population and weights can be any Sequence. Population elements need not be hashable.
* Returning a list means that we don't have to save state in mid-stream (that is why we can't use a generator). A list feeds nicely into Counters, mean, median, stdev, etc. for summary statistics. Returning a list parallels what random.sample() does, keeping the module internally consistent.
* Default uniform weighting falls back to random.choice() which would be more efficient than bisecting.
* Bisecting tends to beat other approaches in the general case. See
* Incorporates error checks for len(population)==len(cum_weights) and for conflicting specification of both weights and cumulative weights.
The API is not perfect and there are some aspects that give me heartburn. 1) Not saving the computed CDF is wasteful and forces the user to pre-build the CDF if they want to save it for later use (the API could return both the selections and the CDF, but that would be awkward and atypical). 2) For the common case of having small integer weights on a small population, the bisecting approach is slower than using random.choice on a population expanded to include the selections multiple times in proportion to their weights (that said, short of passing in a flag, there is no cheap, easy way for this function to detect that case and give it a fast path). 3) Outputting a list is inefficient if all you're doing with the result is summarizing it with a Counter, histogram tool, mean, median, or stdev. 4) There is no cheap way to check whether the user-supplied cum_weights is sorted or whether the weights contain negative values.
I've gone through the patch -- looks good to me.
New changeset a5856153d942 by Raymond Hettinger in branch 'default':
Issue #18844: Add random.weighted_choices()
Thanks Davin.
1. Returning a list instead of an iterator looks unpythonic to me. The values are generated sequentially; there is no advantage to returning a list.
2. The implementation lacks the optimizations used in my patch.
3. The documentation still contains a recipe for weighted choice. It is incompatible with the new function.
Using a generator doesn't prevent state from being saved and restored.
New changeset 39a4be5e003d by Raymond Hettinger in branch '3.6':
Issue #18844: Make the number of selections a keyword-only argument for random.choices().
Equidistributed examples:
choices(c.execute('SELECT name FROM Employees').fetchall(), k=20)
choices(['hearts', 'diamonds', 'spades', 'clubs'], k=5)
choices(list(product(card_facevalues, suits)), k=5)
Weighted selection examples:
Counter(choices(['red', 'black', 'green'], [18, 18, 2], k=3800)) # american roulette
Counter(choices(['hit', 'miss'], [5, 1], k=600)) # russian roulette
choices(fetch('employees'), fetch('years_of_service'), k=100) # tenure weighted
choices(cohort, map(cancer_risk, map(risk_factors, cohort)), k=50) # risk weighted
Star unpacking example:
transpose = lambda s: zip(*s)
craps = [(2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (7, 6), (8, 5), (9, 4), (10, 3), (11, 2), (12, 1)]
print(choices(*transpose(craps), k=10))
Comparative APIs from other languages:
###################################################################
# Flipping a biased coin
from collections import Counter
from random import choices
print(Counter(choices(range(2), [0.9, 0.1], k=1000)))
###################################################################
# Bootstrapping
'From a small statistical sample infer a 90% confidence interval for the mean'
#
from statistics import mean
from random import choices
data = 1, 2, 4, 4, 10
means = sorted(mean(choices(data, k=5)) for i in range(20))
print('The sample mean of {:.1f} has a 90% confidence interval from {:.1f} to {:.1f}'.format(
mean(data), means[1], means[-2]))
New changeset 433cff92d565 by Raymond Hettinger in branch '3.6':
Issue #18844: Fix-up examples for random.choices(). Remove over-specified test.
New changeset d4e715e725ef by Raymond Hettinger in branch '3.6':
Issue #18844: Add more tests
http://bugs.python.org/issue18844
Charles Petzold takes an inside look at the flexible bitmap pixel formats offered by the retained-mode graphics features of Windows Presentation Foundation.
Charles Petzold
MSDN Magazine June 2008
The System.Windows.Shapes namespace is Charles Petzold's namespace of choice for rendering two-dimensional vector graphics in WPF. Here he explains why.
Here Jon Schwartz discusses a programming environment designed just for kids.
Jon Schwartz
MSDN Magazine September
http://www.dotnetspark.com/links/11096-graphics-bitmaps--rectangles.aspx
Here's a quick-and-dirty sample that illustrates foreign inheritance
using Object::InsideOut v1.18:
use strict;
use warnings;
# Borg is a foreign hash-based class
package Borg; {
# A regular hash-based constructor
sub new {
return (bless({}, shift));
}
# A 'get' accessor
sub get_borg
{
my ($self, $data) = @_;
return ($self->{$data});
}
# A 'put' accessor
sub set_borg
{
my ($self, $key, $value) = @_;
$self->{$key} = $value;
}
# A class method
sub assimilate
{
return ('Resistance is futile');
}
}
# Foo is an Object::InsideOut class that inherits from class Borg
package Foo; {
use Object::InsideOut qw(Borg);
# A data field with standard 'get_/set_' accessors
my @foo :Field('Standard'=>'foo');
# Our class's 'constructor'
sub init :Init
{
my ($self, $args) = @_;
# Create a Borg object and inherit from it
my $borg = Borg->new();
$self->inherit($borg);
}
# A class method
sub comment
{
return ('I have no comment to make at this time');
}
}
package main;
# Inheritance works on class methods
print(Foo->comment(), "\n");            # Call a 'native' class method
print(Foo->assimilate(), "\n");         # Call a 'foreign' class method
my $obj = Foo->new();                   # Create our object
$obj->set_foo('I like foo');            # Set data inside our object
print($obj->get_foo(), "\n");           # Get data from our object
$obj->set_borg('ID' => 'I am 5-of-7');  # Set data inside the inherited object
print($obj->get_borg('ID'), "\n");      # Get data from the inherited object
You're innovating faster than I can keep up with it. :-)
I'm glad to see this appear finally, as "foreign inheritance" is one of the other big deals of the inside-out technique. (I like that term -- I've been searching for a good term for it for the seminar that I mentioned in Seeking inside-out object implementations, so I'll have to steal it and credit you.)
Can you talk about the design choices you've made a little bit more? From a quick skim of the code, it looks like you're using some sort of facade pattern (er, adaptor pattern, whatever) in the AUTOLOAD to check method calls against the foreign object(s), rather than using a foreign object as the blessed reference directly with the @ISA array. That does allow multiple foreign objects, but at the cost of incremental indirection.
> innovating faster than I can keep up with it. :-)
> Can you talk about the design choices you've made a little bit more? From a quick skim of the code, it looks like you're using ... AUTOLOAD to check method calls against the foreign object(s), rather than using a foreign object as the blessed reference directly with the @ISA array. That does allow multiple foreign objects, but at the cost of incremental indirection.
The other feature this design provides is encapsulation. The foreign object is hidden from all but the inheriting class's code. This allows the class code to control access to the guts of hash-based objects.
But what happens if you need/want to make explicit calls to a super class?
print( $object->Borg::comment(), "\n" );
Updated/expanded for clarity
Assume that you have a method of the same name in Borg as you do in Foo:
package Borg;
sub comment {
return $_[0]->{default_threat};
}
If I want to explicitly call the Borg version of that method, I'd call it as $obj->Borg::comment(). However, that's equivalent to this:
Borg::comment( $obj );
Therefore, if Borg::comment is expecting an object of type Borg and intends to muck with its internal structure directly, this will fail since the real Borg object is hidden in a closure in Foo and keyed to $obj and $obj is just the blessed reference for the inside-out object.
Or have I missed something?
Nice!
I'm glad I asked my question now: How to use Inheritance with Inside-Out Classes and Hash Based Classes ;-)
http://www.perlmonks.org/index.pl?node_id=514941
> PEP 292 is slated for inclusion in Python 2.4,

For completeness, perhaps update the PEP to specify what will happen with $ strings that do not fall under $$, $identifier, or ${identifier}. For instance, what should happen with:

"A dangling $"
"A $!invalid_identifier"
"A $identifier&followed_by_nonwhitespace_punctuation"

> My new stuff provides two classes, dstring() as described in PEP 292 and
> astring() as hinted at in the PEP. It also provides two dictionary
> subclasses called safedict() and nsdict() which are not required, but
> work nicely with dstring() and astring() -- safedict re-expands keys
> instead of throwing exceptions, and nsdict does namespace lookup and
> attribute path expansion.

The names dstring(), astring(), safedict(), and nsdict() could likely be improved to be more suggestive of what they do.

> Brett and I (I forget who else was there for this) talked about where to
> situate the PEP 292 support. The interesting idea came up to turn the
> string module into a package, providing backward support for the
> existing string module API, then exporting my PEP 292 modules into this
> namespace. This would make 'import string' useful again since it
> would be a place to collect future string related functionality without
> having to claim some dumb name like 'stringlib'. I believe we can still
> someday deprecate the old string module functions, retaining the useful
> constants, as well as new string-y features.

> I also really want to include safedict.py if we're
> including pep292.py because they're quite useful and complementary, IMO,
> and I can't think of a better place to put those classes either.

Can safedict.safedict() be made more general so that it has value outside of string substitutions? Ideally, the default format would be customizable and would include an option to leave the entry unchanged. Right now, the implementation is specific to string substitution formats. It is not even useful with normal % formatting.

> I'm open to suggestions. I have not yet written docs for these new
> classes, but will do so once we agree on where they're getting added.
> The code and test cases are in python/nondist/sandbox/string.

Given the simplicity of the PEP, the sandbox implementation is surprisingly intricate. Is it possible to simplify it with a function-based rather than class-based approach? I can imagine alternatives which encapsulate the whole idea in something similar to this:

import re

nondotted = re.compile(r'(\${2})|\$([_a-z][_a-z0-9]*)|\$({[_a-z][_a-z0-9]*})',
                       re.IGNORECASE)
dotted = re.compile(r'(\${2})|\$([_a-z][_.a-z0-9]*)|\$({[_a-z][_.a-z0-9]*})',
                    re.IGNORECASE)

def _convert(m):
    'Convert $ formats to % formats'
    escaped, straight, bracketed = m.groups()
    if escaped is not None:
        return '$'
    if straight is not None:
        return '%(' + straight + ')s'
    return '%(' + bracketed[1:-1] + ')s'

def subst(fmtstr, mapping, fmtcode=nondotted, _cache={}):
    if fmtstr not in _cache:
        _cache[fmtstr] = fmtcode.sub(_convert, fmtstr)
    return _cache[fmtstr] % mapping

>>> mapping = dict(who='Guido', what='money')
>>> print subst(fmtstr, mapping)
Guido owes me $money.

Raymond
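For context, PEP 292 ultimately shipped in Python 2.4 as string.Template, whose substitute() and safe_substitute() methods cover roughly the ground of the dstring()/safedict() proposals discussed above. A small usage sketch, written with modern Python 3 print syntax:

from string import Template

t = Template('$who owes me $$$what.')
print(t.substitute(who='Guido', what='money'))
# -> Guido owes me $money.

# safe_substitute() leaves unknown placeholders untouched instead of raising,
# which is roughly the role the proposed safedict() played.
print(Template('$who owes me $amount.').safe_substitute(who='Guido'))
# -> Guido owes me $amount.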
https://mail.python.org/pipermail/python-dev/2004-June/045414.html
PHP Gets Namespace Separators, With a Twist 523
jeevesbond writes "PHP is finally getting support for namespaces. However, after a couple hours of conversation, the developers picked '\' as the separator, instead of the more popular '::'. Fredrik Holmström points out some problems with this approach. The criteria for selection were ease of typing and parsing, how hard it was to make a typo, IDE compatibility, and the number of characters."
https://slashdot.org/~HeyBob!/tags/diealready
Vrije Universiteit Brussel Faculteit Wetenschappen Departement Informatica en Toegepaste Informatica
Vrije Universiteit Brussel
Faculteit Wetenschappen
Departement Informatica en Toegepaste Informatica

The use of Semantic Mappings and Ontologies to enrich Virtual Environments

Thesis submitted in partial fulfilment of the requirements of the degree of Master in Applied Computer Sciences.
By: Barry Nauta
Promotor: Prof. Dr. Olga De Troyer
Supervisor: Dr. Frederic Kleinermann
Patience is a virtue: possess it if you can, seldom found in women, never found in men.

To my three favourite women: Geneviève, Elizan and Rosalie. Thank you for all your patience!
Abstract

The web is continuously evolving; web 3.0 is on our doorstep. While the predictions differ, most definitions agree about the next evolution of the web: it will include the semantic web. Where the current web is for human consumption only, the next evolution of the web will allow machine-to-machine communication, enabling better processing of the enormous amount of data that is available. Web 3D is also gaining momentum, with native support (removing the need for plugins) in webpages on the way, leading to virtual worlds, either realistic or game oriented. This thesis looks at the possibilities of combining these two technologies. We describe how we use the semantic web, via the use of ontologies, as the base to construct the virtual world, which means that the information on the virtual objects we would like to display comes from our ontology. Via a semantic mapping, we link this information to a 3-dimensional representation; in other words, we know what we are displaying. Afterwards we show how we can modify this virtual world, in real time, with external information. External information is often available in legacy formats, so we use translators; this way we can access the external information in the same way as we access our primary ontology. Finally, we not only use the ontology to provide us with information on the virtual objects to display in the virtual world, we also use it as a dedicated search engine and let it guide us from starting point to destination, using a graph and graph search algorithms, based on the query posed to the ontology.
Samenvatting (translated from Dutch)

The web is constantly evolving; web 3.0 is at our doorstep. Although many definitions differ, most agree that the semantic web is part of the next evolution of the web. The current web is made for human consumption; the next evolution of the web will enable machine-to-machine communication and, with it, better processing of the enormous amount of data that is currently available. Web 3D is also gaining momentum, with in-browser support so that plugins are no longer needed. This thesis examines the possibilities of combining these technologies. We describe how, by means of ontologies, we use the semantic web as the basis for building our virtual world. This means that the information about the virtual objects we display comes from our ontology. Using a semantic mapping, we link that information to its 3-dimensional representation; in other words, we know what we are showing the user. We then show how we can adapt this virtual world in real time with external information. This information is often available in a legacy format, so it must first be translated; in this way we can use it in the same way as our ontology. Finally, we use the ontology not only to obtain information about the virtual objects we display; we use the same ontology to show us the way in the virtual world. Based on the answer to the query posed to the ontology, we use a search algorithm that guides us from starting point to destination.
Acknowledgements

A while ago, I decided to pick up studies again, to correct opportunities I missed in the past. I have often asked myself why, but the choice was made, the path was taken. Combining a full-time job, family and studies has been very challenging, and I could not have done this without the support of my wife, Geneviève, and my two daughters, Elizan and Rosalie. The countless hours I spent in books and behind my computer have weighed on them. Finishing my studies would not have been possible without their continuous support and love. I would like to take the opportunity to thank Prof. Dr. Olga De Troyer for giving me the opportunity to realize this thesis. Furthermore, my gratitude goes to Dr. Frederic Kleinermann, who supported me, advised me and proof-read my work several times. It is thanks to his guidance that I have been able to write this document. The end of this path is near; I'm coming home.
Glossary

This chapter contains a glossary of terms and abbreviations that are used throughout this document.

Agent: An Agent is just something that acts. But computer agents are expected to have other attributes that distinguish them from mere programs, such as operating under autonomous control, perceiving their environment, persisting over a prolonged time period, adapting to change and being capable of taking on another's goal. [45]
AJAX: AJAX stands for Asynchronous Javascript And XML. It is a group of related technologies that enable interactive web-applications. Using AJAX, web-applications can retrieve data from a server in an asynchronous, non-blocking manner, and update only a part of the webpage afterwards.
API: API stands for Application Programming Interface; an API is an interface implemented by an application, enabling other applications to interact with it.
AR: AR stands for Augmented Reality and describes the technology to view a real-world environment whose elements are augmented by virtual computer-generated imagery. It is used to enhance a user's perception of reality.
Avatar: An avatar is a digital representation, often in the form of a one-dimensional username, a 2-dimensional image or a 3-dimensional model.
Collada: Collada stands for COLLAborative Design Activity and establishes an interchange file format for interactive 3-dimensional applications.
CSS: CSS stands for Cascading StyleSheet and is a stylesheet language primarily used to style web-pages that are written in HTML.
CTT: CTT stands for Concurrent Task Trees, a notation used for task modeling. It is designed to overcome limitations of other notations that are used to design interactive applications.
DTD: DTD stands for Document Type Definition; it defines the legal structure of an XML document by listing its legal elements and attributes.
Flash: Adobe Flash is a multimedia platform used for adding interactivity, video and animations to web-pages. Adobe Flex, a platform to develop Rich Internet Applications, is based on Adobe Flash.
Flex: Adobe Flex is a Software Development Kit (SDK), released by Adobe Systems for development of Rich Internet Applications (RIAs), based on the Adobe Flash platform.
Folksonomy: A Folksonomy is a collaborative classification, typically established by letting users add tags to resources. Another term for folksonomies is collaborative tagging.
HTML: HTML stands for HyperText Markup Language; it describes a markup language, a structured way of creating documents, for web-pages.
HTTP: HTTP stands for HyperText Transfer Protocol, a protocol used to distribute information using the Internet. It is typically used for, but not limited to, distributing web-pages, videos, images etc. from a server to a client.
HUD: HUD stands for Head-Up Display. HUDs provide a 2D component with data on a semi-transparent canvas so it doesn't obstruct the user's view.
IRI: IRI stands for Internationalized Resource Identifier. IRIs are a complement to URIs, using Unicode.
Java: Java is a name for a number of products; it is the name of an object-oriented programming language, but it is also often used as a name for the enterprise platform which is also known under the name JEE (Java Enterprise Edition).
Javascript: Javascript is a scripting language, often used in webpages (client-side Javascript), providing enhanced user-interfaces and dynamic websites. Javascript is unrelated to Java.
JavaFX: JavaFX is a Java platform for delivering Rich Internet Applications (RIAs) that are cross-platform.
Jena: Jena is a Java framework for building semantic web applications.
Joseki: Joseki is an HTTP engine that supports SPARQL queries. It is based on Jena.
JSON: JSON stands for JavaScript Object Notation, a standard for data interchange derived from the Javascript language. Although it is derived from Javascript, it is language independent.
Landmark: A Landmark is a position, typically in a Virtual Environment, that also has an orientation, making landmarks useful for navigation.
Mashups: A Mashup is basically a webpage or an application that combines information from multiple sources to create a new service.
Metadata: Metadata is data about data, often used to indicate what data it is.
O3D: O3D stands for Open 3D; it is an open-source standard, created by Google, for interactive 3D applications.
Ontology: An ontology is a formal representation of knowledge; it is used to reason over this knowledge.
OWL: OWL stands for Web Ontology Language, a family of languages used to describe ontologies.
POC: POC stands for Proof of Concept, a short (and often incomplete) realization of a certain method or idea, used to demonstrate the principle.
POI: POI stands for Point Of Interest; it indicates an interesting location. In 3D worlds, POIs are often attached to viewpoints.
RDF: RDF stands for Resource Description Framework. It is a family of W3C specifications, designed as a metadata data-model.
RIA: RIA stands for Rich Internet Application; the term describes webpages that have characteristics of desktop applications. Adobe Flex (Flash), JavaFX and Microsoft Silverlight are the biggest players in this area.
RIF: RIF stands for Rule Interchange Format, an XML language for expressing rules which computers can execute.
SAI: SAI stands for Scene Access Interface, a Javascript API that allows the user to interact with the embedded X3D scene in a web-page.
Semantic Web: The semantic web represents an evolution of the web in which the semantics of data are defined, making it possible for machines to properly process it.
Silverlight: Microsoft Silverlight is a web application framework enabling the creation of Rich Internet Applications (RIAs), similar to Adobe Flash and JavaFX.
SPARQL: SPARQL stands for SPARQL Protocol and RDF Query Language. It is an RDF query language.
Taxonomy: A Taxonomy is a classification of objects/things in a hierarchical way (typically a tree-like structure).
URI: URI stands for Uniform Resource Identifier (RFC 2396), a string of characters used for identifying a resource. A URI can either be a URL (Uniform Resource Locator) or a URN (Uniform Resource Name).
URL: URL stands for Uniform Resource Locator; a URL is, like a URN, a URI (Uniform Resource Identifier). In this case, it is a URI that specifies where the resource can be located. It is typically used as the locator of a web-address.
URN: URN stands for Uniform Resource Name and is, like a URL, a URI. It does not imply the availability of the identified resource. The functional requirements for URNs are described in RFC
VE: VE stands for Virtual Environment, a computer-simulated environment. The term is closely related to Virtual World (VW) and Virtual Reality (VR).
VR: VR stands for Virtual Reality, a term that applies to computer-simulated environments that mimic either real-life or imaginary places.
VRML: VRML stands for Virtual Reality Modeling Language and is a standard file format for representing 3D objects, especially designed for the World Wide Web. It has been superseded by X3D.
VW: VW stands for Virtual World, a computer-simulated environment through which users can interact and create objects; the term has become a synonym for 3D Virtual Environments. Some, but not all, VWs allow multiple users.
W3C: W3C stands for World Wide Web Consortium, the principal international standards organization for the World Wide Web (WWW).
Web 2.0: Web 2.0 is a term that is often used to indicate web-applications that allow information sharing, user-centric design, collaboration and more.
Web 3.0: Web 3.0 represents the future of the web as we currently know it. Many definitions of web 3.0 currently exist, but the most common include the semantic web; some others also include web 3D.
WWW: WWW stands for World Wide Web, commonly known as the web. It represents a system of interlinked hypertext documents, accessible via the Internet.
X3D: X3D stands for extensible 3D, an open-standards file format and runtime architecture to represent and communicate 3D scenes and objects using XML. It is the successor of VRML. X3D is an ISO standard.
XHTML: XHTML is an XML-based language; it is an extension to HTML, used to write webpages.
XML: XML stands for extensible Markup Language and is a language that describes data/documents in the form of elements and attributes. When comparing to HTML, we could say that XML is the language that describes what the data is, where HTML describes how the data looks.
XMLSchema: XML Schema, much like DTDs, describes the structure of an XML document.
XQuery: XQuery is an XML query language. XQuery can be used to query (find and extract) XML files for their data (elements and attributes); it is built on top of XPath expressions.
XPath: XPath stands for XML Path Language and is a language that is used by XSLT to access (or refer to) specific parts in an XML document. (XPath is also used by XLink, XML Linking, a specification that allows elements to be inserted into XML documents in order to describe links between resources.)
XSLT: XSLT stands for extensible Stylesheet Language Transformation, an XML-based stylesheet language that defines the transformation from one XML document to another document (which can be another XML document, but can also be CSV-based, PDF etc.).
Contents

Abstract
Samenvatting (in Dutch)
Acknowledgements
Glossary
List of Figures
List of Tables
1 Introduction
  1.1 Aim of this thesis
  1.2 Structure of this thesis
2 Related work - Setting the Scene
  2.1 Technology
    The World Wide Web; Web 1.0; Web 2.0; Web 3.0
    The Semantic web; Ontology; Ontology Reasoning and Ontology Inference; Web resources; XML; RDF, RDFS; Directed edge labeled graphs; OWL; SPARQL
    Web3d; Scene; VRML; X3D; WebGL; O3D; Collada
    HTML; XHTML; HTML 5
  2.2 Scientific and industrial works
3 Overview of the approach
  Approach; Ontology, Virtual World and Semantic Mapping; Towards ontology mashups; Navigation
  Resulting overview; RDF Data Bus; Why use semantic web technologies?
  Discussion
4 PathManager
  Semantic mapping extended
  Navigation by query
  Constructing the navigation paths
5 Approach implementation - a case study
  The technical building blocks
  The ontology; Web Ontology Language; SPARQL; Other university ontologies; External information
  The Virtual World - The campus in 3D; extensible 3D; Scene Access Interface; Google Sketchup; Vivaty Studio
  Semantic mapping
  External information
  PathManager
  The User Interface
  The result; Guided tour - Esplanada; Query based navigation
  Considered approaches; Existing similar paraverses
6 Overall limitations
  Semantic mapping
  Adding new objects
  Changing the queries
  PathManager
7 Conclusion
  7.1 Future work
A Model simplification
B Source code - Embedding binary X3D
C Utilities
  C.1 Ontology
  C.2 3D modeling
  C.3 Application
D METAR information
  D.1 Input
  D.2 Result
Bibliografie
List of Figures

2.1 Tim Berners Lee - The Semantic Web, layered cake
2.2 A RDF graph containing information on Persons
2.3 A table containing relational information on Persons
2.4 OWL 1 Profiles - the onion
2.5 OWL 2 Profiles
2.6 Susan Kish - Three separate kinds of Virtual Worlds
Microvision Heads-Up Display, HUDs in vehicles
A Java 3D Scene Graph is a DAG (Directed Acyclic Graph)
X3D Profiles
Moving from a loose plugin-based Scene-Access-Interface (SAI) to the tightly integrated X3DOM model
O3D Software Stack - plugin and future version
Application design - high level overview
Tim Berners-Lee - The RDF Data Bus
The map represented as graph
5.1 Application design - high level overview
X3D System Architecture
The map represented as graph
Concurrent Task Tree of the user interface
Screenshot of the application in action
A.1 Google sketchup - single building, original Sketchup version
A.2 Google sketchup - single building, simplified
A.3 Google sketchup - single building, simplified, textures applied

List of Tables

2.1 Browser plugin statistics
Compression differences between VRML, X3D and X3D compressed
27 Chapter 1 Introduction The web is a big resource of information, ranging from text, videos, 2-dimensional images to 3-dimensional virtual worlds. It has come from a static web, where a lot of textual information was available, to a dynamic web. People used to browse the information, now they are contributing and increase the information that is available. With the web becoming more dynamic, people started building social networks for both pleasure and professional activities. When we take into account that the number of users on the web is still increasing, it becomes clear that the amount of information of the web increases rapidly. The increasing information leads to new challenges. How do we sort this information, how do we visualize it? What can we do to help the users finding information, other than keyword matching which is the base for most search-engines at the moment. To answer to these challenges, new techniques and approaches have been developed and researched. In fact, a lot of research is still being done. These techniques and approaches are grouped under the label of semantic web technology, techniques that help to classify and interpret information that is available on the web, so that machines can understand the meaning and reason on this information. Besides this, we are also assisting to the possibility to have Virtual Environments (VEs) over the web, due to new technologies. Virtual Environments are 3-dimensional virtual worlds, containing 3d objects in which users can navigate and interact with objects or other users. The most famous VE is Second Life. Second Life is an exceptional case, since most VEs have never been very success- 1
28 2 CHAPTER 1. INTRODUCTION ful and even Second Life is no longer as successful as it used to be. Some state that the failure of success of VEs is related to the fact that they are visually not as attractive as video games. While this might be partially true, the visual quality has improved a lot, we can now have attractive VEs. VEs are also very expensive to develop and are therefor not accessible to web authors, this might be another reason for the lack of success. The fact remains that the web is broadcasting information continuously, and a lot of this information is about social networks. This implies that VEs must incorporate this information into their environments in order to have a chance to be used over the web. Besides the ability to use this information, VEs must also adapt to what the user does. In other words, some kind of real-time customization should exist. This will facilitate the way users use the VE. A general view should be that VEs should be developed in the spirit of mashups, where users can combine the different types of information and use it to customize VEs and to display information that is relevant to potential users. The aim of this thesis is to explore how semantic web technologies can be used to provide this facility to webbased VEs. This exploration is done by implementing a concrete scenario, based on the campus of the Vrije Universiteit Brussel. 1.1 Aim of this thesis The aim of this thesis is to explore how we can enrich VEs with information so we can have more user specific VEs, with information coming from webresources and how we can help users to find his way through this information in a 3-dimensional environment. This is a challenging task, for several reasons. The first challenge is related to the fact that 3-dimensional information contains information on shapes only. This information tells the computer how to display the information from a geometrical and material point of view. The computer cannot provide any meaning to what it displays and therefor it is up to the user to derive this based on shapes and context. The second challenge is to have a VE having a virtual world that adapts to both the user and the external information, which can be obtained in real-time and typically changes rapidly. We can create a new source of information by combining different sources, the question remains how we can use and visualize this information inside the VE without completely rewriting it. With this increasing amount
29 1.2. STRUCTURE OF THIS THESIS 3 of information, the third challenge is how users can navigate in a convenient way. This thesis will introduce an approach that addresses some of these challenges using semantic web technologies with the aim of having VEs that can adapt quickly to new information to be used. 1.2 Structure of this thesis This thesis is structured as follows: 1. Chapter 1 provides an introduction to the context in which the research work is being conducted. 2. Chapter 2 provides related work where enabling technologies and research results are discussed. 3. Chapter 3 describes an approach to have dynamic Virtual Environments, based on ontologies. 4. Chapter 4 explains a way we use the ontology to improve navigational issues. 5. Chapter 5 provides a case study, based on the campus of the Vrije Universiteit Brussel. 6. Chapter 6 discusses the overall limitations of our approach. 7. Chapter 7 provides a conclusion and provides some ideas for future work.
31 Chapter 2 Related work - Setting the Scene This chapter provides related work, it is split into two parts. The first part explains existing technologies that are used in this thesis, the second part of this chapter discusses related work, both scientific and industrial, to the contents of this thesis. 2.1 Technology This section highlights some of the technologies that play a role ranging from an Internet point-of-view to more specific web-related technologies and 3d technologies. Besides discussing the current technologies, this section also mentiones previous technologies that have led to the current state of technology. Furthermore, it describes the emerging technologies which are used in this thesis in more detail. 5
32 6 CHAPTER 2. RELATED WORK - SETTING THE SCENE The World Wide Web The Internet, or the world wide web, or the web 1 as we often refer to it, has come a long way. Web 1.0 When the World Wide Web emerged (the first proposal for HyperText was released on November, 12th in 1990 [50]), it contained a lot of hyperlinked, informative pages. These pages were all static in content, there was a lot of information available on the web but understandable only by humans. We now sometimes call this version of the Internet Web 1.0. The rise of Java, and more specifically: Java applets, showed that some interaction with web-pages was possible. Starting with small games, dynamic navigation afterwards and even smaller application within pages later on. These technology changes can be seen as the evolution of the web, creating a broader platform to work with. Slowly, programming languages leveraged the ability to create dynamic pages in a relatively easy way, based on content coming from other sources, like databases, but also coming from user input and more. Dedicated web-frameworks accelerated web-development and at the same time, client-side Javascript, a scripting language often embedded in web-pages which became more and more popular for basic web-page interaction. Internet slowly became a commodity; people started interacting, expressing themselves on the web using forums, taking a web-identity. These were the first indications that something was about to happen. Web 2.0 was on the horizon. The term Web 2.0, was first used by Darcy DuNucci in 1999 [23]: 1 We often see the Internet and The web or The World Wide Web (WWW) as the same thing. It is important to realize that what we call The web refers to only a subset of what we call the Internet. The web refers to web-pages, or from a more technical point of view, namely information in the form of web-pages, shared via the HTTP protocol. The Internet embodies a family of network related protocols like HTTP (principal protocol to send web-pages - displayable information across the Internet), FTP (File Transfer Protocol - principle protocol to send files over the net), SMTP (Simple Mail Transfer Protocol - sending mail) etc.
33 2.1. TECHNOLOGY. The term web 2.0 became popular after Tim O Reilly [39], hosted the Web 2.0 conference for the first time in 2004 Web 2.0 Web 2.0 does not relate to a new version of specifications; instead it can be seen as a combination of several evolutions from a technical point of view. From a business point-of view however, it can be seen as a revolution. The web is no longer about information only, it is now also about collaboration, the web as a platform, rich user experience and more. Tim Berners-Lee calls the term a piece of jargon. He argues that the web was designed to do exactly those things that now are called web 2.0 and that nobody even knows what web 2.0 means [17] Web 2.0 is about collaboration, information sharing, user-centric design, allowing its users to be active; interact with each other as contributors to websites, where before, users could only passively view information on web-pages. Prashant Sharma [48] describes 7 features of web 2.0: User-centered design Customizable pages, fitted to the need of the user. One of the typical examples is igoogle, offering a personalized Google page, where users can add news, weather information, photos and more. Crowd-sourcing Millions of contributions give a website a higher relevance. Typical exam-
34 8 CHAPTER 2. RELATED WORK - SETTING THE SCENE ples include blogging platforms, like Blogger 2 and Wordpress 3 that beat conventional media company by producing extremely frequent and relevant content. Web as platform Web-applications replace more and more desktop functionality. Those webapplications are platform-independent and don t require specific client-side downloads. Google Maps is an excellent example of this aspect. Collaboration One of the most often indicated features of web 2.0 is collaboration. Collaborative information outpaces traditional information sources. A typical example is Wikipedia that provide better and more content than traditional encyclopedia. Power decentralization Web 2.0 follows a self-service model instead of an administrator dependant model. A typical example is Google Adsense where users setup their own advertisement platform without administrator interventions needed. Dynamic content Web 2.0 is also about highly dynamic content. The user-provided information that was listed in crowd-sourcing can be used to lift a site s prestige, it can also help to influence the content. SaaS Cloud application services or Software as a Service deliver software available as webservices without any platform dependencies. As example, we will refer to Google again. Their mail application and on-line office package are excellent examples of software as a service. The following question is easy to foresee: Will there be a next revolution and if so, what will it look like? Web 3.0 Where web 2.0 introduced the first revolution of the web, at least from a business perspective, people are already discussing its successor, web What will And, of course, lots of speculations on web 4.0 already exist.
35 2.1. TECHNOLOGY 9 web 3.0 look like? Regardless of the many definitions that exist, most definitions agree that the semantic web is part of web The Semantic web Tim Berners-Lee mentions [15] that there are two types of information available; Human Readable Information and Machine Readable Information. The Human Readable Information is presented by web-pages, it is the Internet as we currently know it. The Machine Readable Information is data that is explicitly prepared for machine reasoning; part of the semantic web. The semantic web is about linked data on the web, it is about distributed knowledge. It is seen as a future evolution of the web, where the information that is available can be understood by machines and not necessarily humans. The purpose of the semantic web is to enable computers to more easily find information, combining it and act upon it, without the need for human intervention. Figure 2.1 shows the web-standards that enable the semantic web. It shows a picture, also known as the layered cake, were each layer is built on top of the technologies that are referred in the layer underneath. Each layer is general than the layer underneath it. [14] The different building blocks that are shown in this image, often referred to as the semantic web layered cake, require some basic explanations. The technologies that are explicitly used in this thesis will be explained in more detail afterwards: URI/IRI Make sure we use an international characterset (IRI is an abbreviation for Internationalized Resource Identifier ), and provide a way of uniquely identifying resources on the web. In other words: unambiguous names. XML The combination of XML and XML namespaces, as well as XML Schema,
36 10 CHAPTER 2. RELATED WORK - SETTING THE SCENE Figure 2.1: Tim Berners Lee - The Semantic Web, layered cake provide self descriptive documents in a standard way. XML Schema is a layer that restricts the structure of XML documents and additionally extends XML with datatypes. This layer makes sure that we can integrate the definitions with the other XML based standards. This layer is all about syntax. RDF A standard for describing resources on the web, a datamodel for resources and their relationships. RDF is used to describe Metadata, this layer is all about data interchange. RDF Schema (RDFS) A vocabulary for describing properties and classes of RDF resources. With RDF and RDFS it is possible to make statements about objects with URIs and define vocabularies that can be referred to by URIs. This layer is about
37 2.1. TECHNOLOGY 11 taxonomies. 5 SPARQL The language that is used to query ontologies, similar to the way that SQL is used to query relational databases. Ontology: OWL OWL stands for Web Ontology Language 6. This layer supports the evolution of vocabularies, it can define relationships between different concepts. It extends RDF and RDFS, by defining relationships between classes, cardinality etc. Rules: RIF RIF stands for Rule Interchange Format. Rules are basically statements in the form of IF <condition> THEN <conclusion>, RIF provides a family of languages to encapsulate these statements, so computers can execute them. At this moment. the standard is not yet complete. Logic The logic layer enables the writing of rules, currently under active research. Proof The proof layer executes rules, currently under active research. Trust Evaluate whether to trust the given proof. Currently under active research. Crypto Also called the digital signature layer, used for detecting alterations to documents. The layered cake of the semantic web illustrates the fact that standards are built upon other standards, going from syntax through structure, semantics towards proof and trust. Each layer builds upon the previous layer. 5 A taxonomy is a hierarchical way of classifying objects, RDFS allows the notion of hierarchy by introducing inheritance 6 A correct abbreviation of Web Ontology Language would be WOL. Guus Schreiber asks the question: Why not be inconsistent in at least one aspect of a language which is all about consistency It might refer to the Owl of Winie the Pooh, who misspells his name as WOL, or it might be a reference to an Artificial Intelligence project called One World Language [28].
38 12 CHAPTER 2. RELATED WORK - SETTING THE SCENE Ontology The web was originally built to display information in the form of hyperlinked pages. This information is understandable by humans, but although the enormous amount of information is readable by machines, it is not machine-understandable, making it difficult to automate any form of information processing. By adding a layer of additional information that describes the contained data, we can describe the contents that we are actually dealing with. This layer of additional data, also referred to as data about data, is called Metadata. By adding this extra layer of data, we add knowledge to our system. We describe concepts in a certain domain as well as the relationship between those concepts. It is this combination, when properly expressed, that can be understood by machines, it is exactly what entails an ontology. An ontology consists of types, properties and relationship types representing ideas, entities, events along with their properties and relations according to a system of categories. In computer science terms, an ontology refers to an explicit specification of a conceptualization, where a conceptualization is an abstract, simplified view of the world that we wish to represent for some purpose. [26] While it is possible to define any type of ontology, there is a general expectation that ontologies resemble a part of real-life information. [25] So far, it seems that ontologies are a special form of databases, they do not only contain data, but also a description on what exactly is stored. Where databases contain raw data, and have no knowledge on what is stored in its tables, ontologies have this extra layer of data that make the information meaningful. Ontologies are said to be more scalable, since they are built on top of RDF, and RDF is distributed by nature since they use URIs as their foundation, which makes them different from classical databases. The true power of ontologies however, lies in the methods they borrow from Artificial Intelligence, called Reasoning and Inference.
39 2.1. TECHNOLOGY 13 Ontology Reasoning and Ontology Inference Reasoning means that we can draw conclusions by the use of reason. Inference means deriving new facts from existing ones 7. Ontologies contain facts. Facts consists of types, properties and relationship types. Since ontologies are based on facts, we can apply inference to them, although the complexity of inference mechanisms may differ, related to the expressive power of the language that is used [44]. Web resources The URI, or Universal Resource Identifier is one of the most fundamental specifications of Web architecture [15]. The specification basically indicates that anything on the web should be globally and uniquely identifiable, by a string of characters, the URI. The URI is based on the idea by Douglas Englebart [24] called Every Object Addressable, stating that every object should have an unambiguous address, capable of being portrayed in a manner as to be human readable and interpretable. URIs are used to uniquely identify names or resources on the web, using either URNs, Universal Resource Names or URLs, Universal Resource Locators. URNs uses names to uniquely identify resources, a good example is using ISBN to indentify a specific book, URLs uses locations to uniquely identify resources, we can think of a street address, but of course, URLs are typically used to address web-resources. Boht URLs and URNs are specialized cases of a URI, but sometimes it is difficult to categorize a specific schema as either a URL or a URN, since we can use all URIs as names, a URN is a URI that identifies a resource by its name, or we can use URNs to talk about resources by their names, without specifying their location. IRIs, Internationalized Resource Identifiers are a generalization of URIs, since URIs are based on ASCII, a limited characterset allowing only the English alpha- 7 A famous example is the following: Socrates is a man, All men are mortal from which we can derive Socrates is mortal. This form of reasoning is called deductive reasoning
40 14 CHAPTER 2. RELATED WORK - SETTING THE SCENE bet, IRIs are based on the Universal Character Set, Unicode, allowing many more characters, including Arabic, Chinese etc. XML XML, or extensible Markup Language is a markup language that is used for encoding documents, in a structured way. It is most often used to present text and data in a way that is can easily be processed, without the need of human or machine intelligence. A key-point in XML is that it separates form and content. HTML, the language behind webpages, for instance, consists mostly of tags defining the layout of text, in XML tags define the structure and the content of data. A second important point is that XML is extensible, meaning that it is not a fixed format, like HTML for example is. DTDs or Document Type Definitions is a set of markup declarations. They define exactly which kind of element may appear where in a document and what the elements content and attributes are. DTDs are superseded by XML Schema, which also is used to put constraints on XML documents, using a set of rules, with a big difference that XML Schema is written using XML, DTDs are not. RDF, RDFS RDF stands for Resource Description Framework and is a W3C (World Wide Web Consortium) approved specification. It is part of the semantic web layered cake shown in Figure 2.1 RDF is a datamodel used to make statements about resources in the form of socalled triples, a combination of a subject which identifies what object the triple is describing, a predicate, also known as property, which defines the data we are going to give a value to, and an object, the actual value we will give. It is a decomposition of knowledge into smaller pieces. In fact, the goal of RDF is to be as simple as possible, so we can express any fact. On the other hand, it must be very structured, so it is easy for computers to process. The example below contains an informal triple, where the subject is a person, the predicate points to this persons first name and the object, the actual value, is a
literal; it contains the text Geneviève.

Person, has firstname, Geneviève

Note that each RDF statement should be complete and unique; therefore the subject must use a unique URI (the subject can also be an anonymous node or blank node, also known as a bnode). This is an HTTP URI (HTTP stands for HyperText Transfer Protocol, one of the protocols used to transfer data over the Internet; resources accessible via HTTP are identified by a string of characters known as a Uniform Resource Identifier, and URLs, Uniform Resource Locators, often called web-addresses, are a subset of URIs); we call this a named resource. Since URIs can become long and unreadable, we often abbreviate them using the concepts of XML namespaces (namespaces have no significance in RDF; they are simply a tool to abbreviate long URIs). RDF itself is an abstract datamodel; it comes in two formats, called serializations: XML (extensible Markup Language) and N3 (Notation 3) for a non-XML representation. The informal example above could be written in RDF/XML format as follows:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="..." xmlns:example="...">
  <rdf:Description rdf:about="...">
    <example:has_firstname>Geneviève</example:has_firstname>
    ...
  </rdf:Description>
  ...
</rdf:RDF>

In a few words: RDF is used to model information that is implemented in web resources. It is a datamodel, making the semantic web that uses RDF a decentralized platform for distributed knowledge, in contrast to the current web, which is a decentralized platform for distributed information visualization. RDF Schema, or RDFS for short, adds additional semantics to RDF. These semantics are a type system of classes, including inheritance and properties.
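To make the triple and its serialization concrete, here is a small sketch using the Python rdflib library. The thesis itself works with Jena (Java); rdflib and the example.org names below are only illustrative placeholders, not identifiers from the thesis ontology:

# Illustrative only: rdflib is not used in the thesis, and the example.org
# URIs are placeholders rather than identifiers from the thesis ontology.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace('http://example.org/terms/')
g = Graph()

person = URIRef('http://example.org/people/genevieve')
g.add((person, EX.has_firstname, Literal('Geneviève')))

# Serialize the one-triple graph as RDF/XML, the format shown above.
print(g.serialize(format='xml'))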
42 16 CHAPTER 2. RELATED WORK - SETTING THE SCENE Class (type of resource), property Subclasses and subproperties Domain and range Comments and labels Directed edge labeled graphs The triples of RDF actually construct graphs 11. Figure 2.2 shows a partial RDF graph of a family. Figure 2.2: A RDF graph containing information on Persons In general, the nodes in RDF graphs are things, arcs are relationship between things. 11 A graph refers to a collections of nodes, or vertices, that may be connected. The connections between nodes are called edges. Edges may be directed, from one vertex to another, or undirected. We can also give a weight to edges, for instance the distance or a payload for travelling over the edge. Search-algorithms for the different type of graphs are different.
43 2.1. TECHNOLOGY 17 The notion of graphs is used, because in graph isomorphism theory [20], a lot of research has been done to the problem whether two (sub-)graphs are the same and how they can be merged. The research often deals with unlabeled, undirected graphs. In RDF graphs, this problem becomes relatively easy, since most vertices are labeled with the URI of the resource and most edges have distinct labels from the URI of the property of the triple. RDF compared to a relational database If RDF is actually nothing more than a simple datamodel, why not use a relational database? If RDF is about information on web-resources, aren t we building a web-database? When we look at relational databases, we see that they contain tables. Each table consists of rows (the things we are storing information about) and columns with represent the attributes/properties of those things. The combination of a row and a column gives us the value of a specific attribute of a thing stored in the database. If we look at database containing person information, we can imagine the situation as described in Image 2.3. Figure 2.3: A table containing relational information on Persons
44 18 CHAPTER 2. RELATED WORK - SETTING THE SCENE The rows in the first table represent a Person, the columns represent the attributes we know, and are interested in, of this Person. In the case of the image, the highlighted person has an attribute Firstname with value Geneviève. We can easily see that it is not hard to translate this information into an RDF graph. In fact, Figure 2.3 maps to Image 2.2. The nodes in RDF graphs are things, arcs are relationship between things. Foreign keys simply become a relationship to another entity. This is demonstrated by looking at the second table. The power of using RDF graphs is that the things are identified by URIs, making them web-enabled and additionally, we can uniquely identify them. Nodes with the same URL are considered identical and this can be used to merge graphs. Merging of RDF graphs has no limitations, any RDF graph can be merged with any other RDF graph. Formally, it is said that RDF is monotonic, merging graphs means merging triples of identical nodes. Adding triples never changes the meaning of a graph, which basically means that you cannot invalidate earlier conclusions nor can you unsay statements by adding triples. To answer the question if we are building a web-database, the reply is more-orless. RDF uses URIs as unique identifiers, we use graphs so we can easily merge information. The information is available on the web, which means that it is not centralized. Therefore we can easily modify information and we can modify in parallel, which means that this is a very scalable solution. We could build a webdatabase. The problem lies in the fact that representing information in RDF is often considered a major overhead [18], therefore it is not often applied, so little information is available since people chose not to make their information available via RDF. OWL Web Ontology Language (also known as OWL ) is a language that is built on top of RDF and RDFS. It includes a standard vocabulary for describing properties and classes, like datatypes, relations, cardinality, characteristics of properties etc. There are currently two versions of OWL; OWL 1 released in 2004 [9] and OWL 2 which was released at the end of the year 2009 [8], the latter being backwards compatible, but is still a working draft.
45 2.1. TECHNOLOGY 19 OWL 1 OWL comes in three different sublanguages: OWL Lite OWL Lite supports basic classification hierarchies and constraints. OWL DL OWL DL extends OWL Lite, DL stands for description Logic, OWL DL supports all language constructs, but there are some limitations on transitive properties. OWL Full OWL Full extends OWL Lite and OWL DL Each variant has an increased expressiveness, starting by OWL Lite, through OWL DL and finishing by OWL Full. The language become more expressive, by syntactic extensions of its predecessor. This means that each legal OWL Lite ontology is a legal OWL DL ontology, and each legal OWL DL ontology is a legal OWL Full ontology. [9] The different profiles of OWL 1 are listed in Figure 2.4 Figure 2.4: OWL 1 Profiles - the onion OWL 2 OWL 2 is a revision and an extension of the first version of OWL. The following major features are added to OWL 2: Improved syntax, making relations easier to express More forms of expressiveness
46 20 CHAPTER 2. RELATED WORK - SETTING THE SCENE More datatypes and ranges Metamodeling Annotations OWL 2 supports, just like OWL 1, several profiles. There are three profiles in the new specification: OWL 2 EL OWL 2 EL is targeted to applications employing large ontologies that use a simple format. It keeps expressiveness power and consistency, while providing polynomial reasoning time. OWL 2 QL OWL 2 QL is based on OWL DL (and therefore also OWL Lite) and provides an intersection of RDFS and OWL 2.0 DL. It is targeted to data querying and storage. OWL QL also provides ways to express conceptual models. OWL 2 RL OWL 2 RL uses a syntactic subset of OWL 2 and part of its RDF based axiom semantics. This profile is targeted for applications that favor scalability and performance over expressive power. The different profiles of OWL 2 are listed in Figure 2.5 SPARQL SPARQL stands for SPARQL Protocol and RDF Query Language and is an RDF query language. Like said before, RDF can be seen as the web-database, provided that all information evolves to known ontologies. SPARQL can be seen as the web s query language. Every identifier in SPARQL is a URI, which is a global unique identifier. Using SPARQL, we can query in an unambiguous way, unlike in databases where we typically use a combination of firstname and surname to indicate a person, which is clearly not a unique combination. To avoid this, IDs are often applied as key for the record, making the ID in combination with its table unique in the database only.
Figure 2.5: OWL 2 Profiles

Using the same example as earlier, we could imagine the following query for a person's firstname:

    PREFIX nauta: <>
    PREFIX example: <>
    SELECT ?firstName
    WHERE { nauta:gc example:has_firstname ?firstName }

Web3d

We have discussed the future of the web, which is often called web 3.0. We have stated that most definitions of web 3.0 include the semantic web, but some definitions also include web3d in their definition. Web3d refers to interactive 3d content in web-pages. At this moment, such content typically requires plugins to be displayed, although work is under way to render 3d content via Javascript, which is currently the case for O3D, or to embed it natively in the web-page. In fact, this is one of the main goals of HTML5: removing the need for plugins. Although web3d refers to any type of 3d content, we will limit this in this paper
to virtual worlds: 3d worlds that resemble our world, displayable in a browser. Virtual Worlds are not limited to a world as we currently imagine it. We can extend the notion of world to any sort of containing environment, grouping related objects, but we will use the notion of a paraverse throughout this chapter. A paraverse is a virtual representation of the world, or a part of it, as we know it. In other words, in this document, we will use the terms Virtual World and paraverse for the same thing: to describe a simulated environment that resembles a real-world environment.

Susan Kish [33] identifies three major emerging types of universes:

- Massive Multiplayer Online Role Playing Games (MMORPGs)
- Massive Multi-learner Online Learning Environments (MMOLEs): virtual environments that are focused on learning platforms, virtual training worlds, e-learning etc.
- Metaverses: typically virtual worlds that are both social and game oriented. Second Life, a Virtual World allowing users to explore the world, interact, participate in activities and socialize, is the classic example of a Metaverse.

Additionally, she mentions two types that are related to the aforementioned:

- Intraverses, a Virtual World on a corporate Intranet.
- Paraverses, a Virtual World that tries to resemble a (part of a) real-life environment. Paraverses are also called mirror worlds. Google Earth is the classical example in this case.

Virtual Worlds (VWs), from a more technical perspective, are computer-based environments, typically containing users and virtual objects. Some VWs allow users to interact with other users, some VWs allow users to manipulate objects, and some VWs allow a combination of those. Virtual Worlds are not only something we see; we can also participate in them, and people often do that in the form of Avatars, a digital representation of their presence.
Figure 2.6: Susan Kish - Three separate kinds of Virtual Worlds

Avatars

The word Avatar comes from Sanskrit. It signifies the incarnation or reappearance of a god in the living world. When people are visiting a Virtual World, they are often represented by an avatar, a digital representation, often in the form of a one-dimensional username, a 2-dimensional image or a 3-dimensional model.

Head-Up Displays

When using a Virtual World, there is a lot of information available. Often, it is not desirable to display all information in 3d form and for this, Head-Up Displays are often used. Head-Up Displays are used to provide additional means of showing information. HUDs provide a 2d component with data on a semi-transparent canvas so it doesn't obstruct the user's view. Initially developed for military purposes, HUDs
are now also available in commercial aircraft, cars etc.

Figure 2.7: Microvision Heads-Up Display, HUDs in vehicles

HUDs are also often available in Virtual Worlds, providing additional information without cluttering the 3d structures.

Scene

Anything you would like to represent in 3D is a scene. A scene contains the structure (wireframes, mesh), textures, cameras (viewpoints in the virtual world) etc. that are needed to display the 3d content. Most of the 3d engines share a data structure known as a scene graph, containing all scene information. A scene graph is a hierarchical (tree) structure in which a specific node can have many children, but a child can only have one parent (some scene graphs are implemented as directed acyclic graphs, in which children can have multiple parents). Nodes in a scene graph are typically one of the following:

- Group nodes: nodes that can contain children. These nodes are typically not visual. Effects that are applied to group nodes are also applied to all their children.
- Leaf nodes: nodes that can actually be seen (or, for audio, heard).
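As an illustration of this structure (a sketch only, not taken from any particular engine; the node names and fields are invented for the example), a scene graph can be modelled as plain objects in which group nodes hold children and an effect applied to a group propagates to every leaf below it:

    // Minimal scene-graph sketch: group nodes contain children,
    // leaf nodes carry the geometry that actually gets rendered.
    function GroupNode(name) {
      this.name = name;
      this.children = [];
    }
    GroupNode.prototype.add = function (node) {
      this.children.push(node);
      return node;
    };

    function LeafNode(name, geometry) {
      this.name = name;
      this.geometry = geometry;     // e.g. a mesh reference
    }

    // Applying an effect to a group node applies it to all of its children.
    function applyEffect(node, effect) {
      if (node.children) {
        node.children.forEach(function (child) { applyEffect(child, effect); });
      } else {
        effect(node);               // only leaf nodes are actually visible
      }
    }

    var campus = new GroupNode("campus");
    var buildingM = campus.add(new GroupNode("buildingM"));
    buildingM.add(new LeafNode("walls", "wallMesh"));
    buildingM.add(new LeafNode("roof", "roofMesh"));

    applyEffect(campus, function (leaf) { leaf.highlighted = true; });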
Figure 2.8: A Java 3D Scene Graph is a DAG (Directed Acyclic Graph)

VRML

VRML stands for Virtual Reality Modeling Language and is a standard file format that is used for describing 3D interactive graphics. VRML is a text-based format that describes edges, polygons, textures, transparency parameters, etc. [10]. It also specifies the ability to add sound, animations and more. Special nodes like the Timer node can be used to interact with the scene; likewise, script nodes allow a piece of program code, typically ECMAScript, to be added to the VRML file, which can be executed when certain events are triggered. These events can be timer events, but can also come from user-interaction. VRML was designed to be used in web-pages; it has been superseded by X3D.
X3D

X3D is an acronym of extensible 3D and is an ISO standard specifying the file format for 3D content. It is the successor of VRML, adding extensions to VRML as well as the ability to encode scenes using XML syntax. X3D defines several profiles, each adding some more capabilities to the less rich profiles:

1. X3D Interchange: provides basic features like grouping of objects.
2. X3D Interactive: extends X3D Interchange and adds features that enable interactivity, like touch sensors, inline nodes etc.
3. X3D Immersive: adds scripts, audio and more.
4. X3D Full: adds Geospatial, NURBS (Non-uniform rational basis spline, a mathematical model for generating curves and surfaces) and H-ANIM (Humanoid Animation).

Figure 2.9: X3D Profiles

The traditional way of viewing X3D in a browser is by installing an X3D plugin, like Vivaty (which announced on the 31st of March 2010 that it was shutting down; its products are currently no longer available), Octaga or BS Contact. Most of these plugins use a standard called SAI,
which is a Javascript API used to communicate with the plugin. It allows external parties to call predefined functions on the X3D plugin. BS Contact does not offer the SAI, but has a proprietary solution instead.

X3DOM

X3DOM is an open source framework and runtime to integrate X3D natively (meaning: without the need for plugins) in HTML5, by trying to fulfill the HTML 5 declaration for declarative 3d content. 14 The idea is to include X3D elements in the HTML 5 DOM 15 tree, which would enable us to manipulate the 3d scene by modifying DOM elements, without the need for plugins (which are controlled via the SAI). The idea is illustrated in Figure 2.10 [1].

Figure 2.10: Moving from a loose plugin-based Scene-Access-Interface (SAI) to the tightly integrated X3DOM model.

WebGL

WebGL is a standard (a low-level API) to provide 3d content in web-browsers, without the need for a plugin. WebGL runs in the HTML5 canvas, which means that it has full access to the DOM (Document Object Model) interfaces. The fact that WebGL is based on OpenGL has the advantage that OpenGL is cross-platform 16 and cross-browser. All major browser vendors already implement it in their current beta versions of the browser.

14 The draft on HTML 5 (at no.html#declarative-3d-scenes) specifies the following: 13.2 Declarative 3D scenes. Embedding 3D imagery into XHTML documents is the domain of X3D, or technologies based on X3D that are namespace-aware.
15 DOM is an abbreviation for Document Object Model; it is a hierarchical structure for representing and interacting with objects in a HTML page. Interaction with objects in the DOM is typically done via client-side Javascript.
16 Cross-platform should not be confused with platform independent. WebGL is cross-platform, indicating that it runs on a variety of platforms, but not necessarily all. X3D, as example, is platform independent, which means that it should be possible to implement render engines on any platform.

The disadvantage on the
other hand is that it is tightly coupled to hardware-accelerated graphics, which makes portability to other infrastructural platforms (like mobile devices) difficult or sometimes even impossible.

O3D

O3D is being developed by Google, and is one of the more popular 3D Web formats. It is an open source Javascript API for creating interactive 3D applications. It started out as a browser plugin, but has been replaced by a Javascript library using WebGL. The differences between the two versions are shown in Figure 2.11.

Figure 2.11: O3D Software Stack - plugin and future version

The previous version shows the O3D core software, the plugin that needed to be downloaded and installed in your browser; the new version shows that there is no longer a need for a dedicated plugin: the rendering is handled by a Javascript API.
The reason why Google initially built a plugin was that they did not expect that Javascript could meet the performance expectations. Recent developments in Javascript engines show that Javascript is becoming more and more performant. 17

Collada

Collada is an abbreviation for COLLAborative Design Activity and is an intermediate file format for 3d applications. Most 3D render applications (like Blender, 3D Studio), but also tools like Adobe Illustrator and Google Sketchup, have facilities to export to this format. Digital Assets in Collada files are described in XML format; the files have the extension .dae, which is an abbreviation for Digital Asset Exchange, emphasizing the fact that Collada provides an intermediate format. 18

HTML

HyperText Markup Language, also known as HTML, is the markup language that is used on the Internet. The language defines ways to structure documents, by using tags to indicate page elements. Browsers are applications that interpret this text and display it according to a well-defined standard. HTML defines a way of describing the layout of information, by using well-defined elements such as headings, lists, paragraphs and other items. In short: HTML describes the layout of webpages. The current version of HTML is HTML 4.01.

CSS, or Cascading StyleSheets, is a standard that defines the appearance of the HTML elements in web-pages.

17 Not all JavaScript engines are created equal. Google Chrome has V8, Firefox is working on TraceMonkey, and Apple has SquirrelFish. Conspicuously absent is Microsoft, which has opted to invest more broadly in realistic scenarios when developing Internet Explorer, with the result that one of the most popular browsers has, by most accounts, one of the slowest JavaScript engines. The good news is that O3D includes Google's V8 JavaScript engine, which ensures consistent performance across all browsers. [21]
18 Google Earth's standard export format is actually a .kmz file, which is a zip archive containing .dae files.
The current version of CSS is CSS Level 2.

XHTML

XHTML is an XML-based version that extends HTML. Up to HTML 4, HTML was defined as an application of Standard Generalized Markup Language (SGML), a very flexible markup language framework; XHTML is defined as an application of XML. The difference between XHTML and HTML is that XHTML must be a well-formed XML document. The current version of XHTML is XHTML version 1.1.

HTML 5

HTML 5 is the proposed successor of HTML 19 and has reached Draft status in March 2010 (which is 8 months behind schedule). The specification should become a W3C standard by the end of the year. One of the main goals of HTML 5 is to remove the need for plugins, like Adobe Flash, Microsoft Silverlight or Java FX. Most of those platforms require some sort of Virtual Machine (available as plugin) to run. The power of the use of these frameworks is related to the percentage of install base of the plugins (Javascript-based frameworks are an exception to this, since Javascript is provided natively in most of the browsers). StatOwl provides the following plugin market share, as an average percentage over the year 2009. The browser penetration of Flash and Java is relatively stable; the penetration of Silverlight is strongly increasing, starting at 21.31% in January 2009 and ending at 37.38% in November 2009.

19 Actually, HTML 5 is the proposed successor for the combination of HTML 4.01, XHTML 1.0 and DOM Level 2 HTML.

Work is on the way to provide a native (i.e. no plugins) integration of X3D in
HTML. [2] [12]

    Plugin        Avg 2009   Jan 2009   Nov 2009
    Flash         96.52%     97.40%     96.71%
    Java          80.74%     81.51%     81.68%
    Silverlight   29.58%     21.31%     37.38%

Table 2.1: Browser plugin statistics

2.2 Scientific and industrial works

This section describes related work, either scientific or from an industrial point of view.

SecondLife [3] is probably the most famous Virtual World that exists. It allows users, via avatars, to interact, socialize, participate in activities and more. Second Life also offers trade options and virtual property.

ExitReality creates an instant virtual place from every web page that is available on the internet. ExitReality is based on the internet itself and not a closed environment like Second Life. Once the plugin is installed, the application allows you to interact and socialize with other users through their avatars and chat boxes. The application turns webpages into virtual rooms, adapting the room to the user's interests. This is done for social network sites like Facebook or MySpace, but they aim to transform every webpage into a 3-dimensional website. Navigation in these virtual rooms is not easy.

De Troyer et al. [51] have developed an approach to make VEs adaptable to learners. It is based on the concept of a virtual reality adaptation state, where the virtual world adapts according to the learner. However, it does not use any kind of way to search and navigate easily in the 3-dimensional space. It is primarily done in the context of adaptable e-learning applications.

H. Mansouri [35] used an ontology, in combination with the VR-WISE approach, to create a search engine. Their work is based on the assumption that, while the virtual world is constructed, semantic data can be added via annotations. This
semantic data can afterwards be used for a search engine and for navigation in the VW. This approach is limited to static VWs, meaning that the VW cannot be adapted.

Van Ballegooij & Eliens [52] state that in a web-based 3d virtual environment, users often encounter the problem of being lost-in-cyberspace. Besides the notion of disorientation, users have the problem that it is hard to discover all that a VW has to offer without spending a lot of time exploring it. They propose navigation by query to overcome these burdens. Navigation by query augments the possibility for users to navigate by allowing the user to query for content.

Peng et al. [42] discuss improvements in VR performance in 3D building navigation. They tackle performance problems of large-scale VWs by dynamically loading models based on cell segmentation. They investigate route optimizations based on path planning to ease navigation.

K.H. Sharkawi et al. [47] discuss the combination of Geographical Information Systems (GIS) with 3d game engines. They explore virtual navigation, using real spatial information (colors and shapes), landmarks and other features in 3D geo-information, leading to a significantly enhanced navigation system compared to the 2d maps that are typically used in GISs.

M. Haringer and S. Beckhaus [27] explore extensions to Virtual Reality systems to manipulate objects of a scene, primarily by applying effects at runtime. They implemented a system which provides a user-interface allowing moderators, authors, or automated systems to modify the scene online using the available effects.

J. Ibáñez [29] provides a querying model that allows users to find objects and scenes in virtual environments based on their size as well as their associated meta-information. This model is based on fuzzy logic and is even able to solve queries expressing vagueness. They did not look at the possibilities of applying queries in a VW that changes.

Denise Peters and Kai-Florian Richter [43] discuss the problems people have when trying to orient themselves in large-scale environments. They propose to apply concepts and methods of schematization to focus on the relevant information in representing virtual cities. They do so by investigating processes which play a role in forming mental representations of city environments and use a cognitive agent for evaluating different schematization principles applied to a virtual city by simulating wayfinding tasks.

Xiaolong Zhang [56] takes a look at navigation in a virtual world using a
multi-scale progressive model. He examines the use of scaling in virtual environments to improve the integration of spatial knowledge and spatial action.

Jia et al. [31] implement a dedicated search algorithm that can be used for navigation in large-scale 3-dimensional environments. They use a browser-based approach, implementing the algorithm in Javascript, so it can be embedded in the scene itself. The search algorithm is executed in the client's browser.

Yuhana et al. [55] use inference on buildings to determine relative locations of buildings with regard to others. By using distance-ranges, they determine connectivity, build a graph and use a search algorithm afterwards to calculate shortest paths. This is a very interesting approach, but it is not useful for natural navigation, since the paths are considered as straight lines, while in real life this is most often not the case.

F. Kleinermann et al. [34] explore navigational issues in Virtual Environments by using semantic information. They discuss the possibility of annotating Virtual Objects, allowing the creation of navigation paths and virtual tour guides. Their annotations are not limited to text but can also be multimedia objects. This approach is not usable for our case, since we would like to use external data that is typically subject to change; additionally, we might want to change the appearance based on the external information. These are issues that are difficult to annotate.
Chapter 3

Overview of the approach

We have explained in the introduction that the aim of this thesis is to address three challenges. The first challenge is to add meaning to the Virtual Environment, more precisely its Virtual World (VW). We want to relate the internal structure of the VW, the virtual 3-dimensional objects, to human-understandable information. The second challenge is to be able to adapt the VW with new information. We would like to be able to display this information, but we would also like to be able to use it, reason on it and use the result to adapt the VW. The third challenge is to improve navigation inside the VW. This chapter will explain step by step how we have designed an approach that addresses these challenges. We will first introduce the approach and then discuss it.

3.1 Approach

As explained in the introduction, we aim at using semantic web technologies to tackle these three challenges. We will first explain how we can add some human-understandable meaning to a virtual world by the use of ontologies. This will be the first building block of our approach. From there, we will explain how the approach can be extended to allow, on the one hand, new types of information coming from the web to be visualized inside the virtual world, and on the other hand how we
can reason on such information to adapt the virtual world accordingly. Finally, we will introduce how our approach can be extended to facilitate navigation inside adaptable virtual worlds.

Ontology, Virtual World and Semantic Mapping

As explained in the previous chapter, Virtual Worlds display (composite) structures, apply effects and more, making the scene look like a part of the world we know. The scene graph tells the 3d engine how to display information, but the world has no idea what it is actually displaying. There are several formats that can be used for 3-dimensional information, and although it might be possible to store other information in those files as well, it is neither a generic nor an elegant way. We need to store this information elsewhere. Having different types of information at different locations implies that we need to have some sort of mapping between the information and its 3-dimensional representation. We will start by using the ontology as a way to relate information from the ontology to the virtual world.

Towards ontology mashups

The approach should allow an author to use information coming from external sources, like webservices or RSS feeds, to be visualized inside the virtual world. Since RDF is XML based, we should be able to easily combine multiple sources, using stylesheets. This goes towards the concept of mashups that are used nowadays in web 2.0. Instead of having different information sources, we can combine them and use them as one new information source. Although this seems to be obvious, since RDF is an XML-based language, numerous sources mention [54] [11] [7] that it is actually very difficult and frustrating to combine RDF and XSLT. 1

1 We could use XHTML in combination with XSLT to convert web-pages into RDF, but we cannot use XSLT in combination with RDF/XML with complete certainty, since RDF/XML is non-deterministic.
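Returning to the idea of combining sources: the following fragment is a small illustration, not part of the thesis implementation, of how two RDF graphs, represented here as plain lists of subject-predicate-object triples, can be merged into one new information source. All URIs and property names are invented for the example; the point is only that nodes with the same URI denote the same thing, so merging is a simple union of triples that never invalidates earlier statements.

    // Two tiny RDF graphs as arrays of [subject, predicate, object] triples.
    var graphA = [
      ["http://example.org/person/gc", "http://example.org/hasFirstName", "Genevieve"]
    ];
    var graphB = [
      ["http://example.org/person/gc", "http://example.org/worksFor", "http://example.org/unit/42"]
    ];

    // Merging is a union of triples; identical URIs identify identical nodes.
    function mergeGraphs(a, b) {
      var seen = {};
      var merged = [];
      a.concat(b).forEach(function (triple) {
        var key = triple.join(" ");
        if (!seen[key]) {           // skip exact duplicates
          seen[key] = true;
          merged.push(triple);
        }
      });
      return merged;                // adding triples never removes earlier statements
    }

    var merged = mergeGraphs(graphA, graphB);  // two triples about the same node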
If we are able to push changes in the external information towards the Virtual World in a near real-time manner, the virtual world can then be seen as a template where dynamic information can be visualized. Similar techniques already exist in ExitReality and SecondLife, where information on 3-dimensional panels changes. Changing display information is one thing, but we want to go one step further by adapting the virtual world. As an example, instead of changing the information on panels in a 3-dimensional world, like the sun and clouds that are often seen in weather forecasts, we could think of adapting the world by changing the sky, adding fog etc. Another example would be a virtual lecture theatre that changes color based on the number of students that are following a specific course.

There are many data sources available, and lots of different data formats, like Yahoo Weather vs. METAR for weather information, Yahoo Stock vs. MetaStock for stock information, etc. Regardless of the format of the information, it needs to be translated into a format that we can understand, and the logical choice for the resulting format is RDF. This implies that for each resource, we potentially need a dedicated translator. These are typically programmed, thus adding new resources means that code needs to be written or adapted (unless translators can be reused, of course), and this also applies to changed data formats. These translators are small stand-alone programs that take external information and translate it to RDF.

So, we have one or more external information sources. They need to be translated, so we need to write a dedicated application for this. This information needs to be added to the Virtual World, so we need to adapt the VW to handle this new information, if no means exist yet. In between, we need to map the information with the virtual world, which we call semantic mapping.

Gregory Sherman [49] researched the use of XSLT and large XML databases and concludes that, since XSLT has a tendency towards quadratic complexity, it should only be used on small- to medium-sized XML databases.
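As an illustration of the translators described above (a sketch only, not the actual application used for this thesis; the namespace, the property names and the very simplified METAR handling are assumptions), a small piece of Javascript could take a raw METAR string and emit an RDF/XML fragment:

    // Stand-alone translator sketch: raw METAR in, RDF/XML out.
    // The weather namespace and property names are made up for this example.
    function metarToRdf(metar) {
      var tokens = metar.trim().split(/\s+/);
      var station = tokens[0];                          // e.g. "EBBR"
      var wind = /^(\d{3})(\d{2})KT$/.exec(tokens[2]);  // e.g. "24015KT"
      var temperature = null;
      tokens.forEach(function (t) {
        var m = /^(M?\d{2})\/M?\d{2}$/.exec(t);         // e.g. "12/07"
        if (m) { temperature = parseInt(m[1].replace("M", "-"), 10); }
      });

      return '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n' +
             '         xmlns:w="http://example.org/weather#">\n' +
             '  <rdf:Description rdf:about="http://example.org/station/' + station + '">\n' +
             '    <w:windDirection>' + (wind ? wind[1] : 'unknown') + '</w:windDirection>\n' +
             '    <w:windSpeedKnots>' + (wind ? wind[2] : 'unknown') + '</w:windSpeedKnots>\n' +
             '    <w:temperatureCelsius>' + temperature + '</w:temperatureCelsius>\n' +
             '  </rdf:Description>\n' +
             '</rdf:RDF>';
    }

    // Example call: metarToRdf("EBBR 051150Z 24015KT 9999 SCT030 12/07 Q1015");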
Navigation

The third aim of the thesis is to facilitate navigation inside an adaptable virtual environment. Before explaining our approach to navigation, we will introduce some terms and background information on navigation in 3d worlds.

Navigational awareness is defined as having complete navigational knowledge of an environment [46]. Glenna A. Satalich defines two distinct types of navigational knowledge:

- Procedural knowledge, or route knowledge. This type of navigational knowledge is ego-referenced and is gained by exploring the area. This type of knowledge is characterized by the fact that the user can go from one point to another, but has no knowledge of alternate routes.
- Survey knowledge is world-referenced, typically attained by multiple explorations of the environment. This is similar to having a mental representation of a physical map, also called a cognitive map. The characteristic of this type of knowledge is that distances and landmarks are known and routes can be inferred, even if they have not been travelled before.

Applying navigational knowledge to Virtual Worlds gives us the definition of wayfinding. Wayfinding is a dynamic process of using our spatial ability and navigational awareness of an environment to navigate in a Virtual World. The problem with wayfinding in Virtual Worlds is that, from a human perspective, it is very difficult [22] [53] [43]. It is very easy to lose orientation, to get lost-in-cyberspace [52] [19] [32]. One of the ways to overcome this feeling is to use a 2-dimensional map next to the 3-dimensional world. This map would typically be placed in a Head-Up Display, and the user's location is indicated on both maps. This is an improvement, although it often leads to another issue called the alignment problem [36] 2. Another approach can be that we do not let the user navigate, but let the system guide us. This approach is based on Navigation by Query [52].

2 A map is called aligned when the upward direction of the map corresponds to the current direction of gaze in real space.
Our approach allows two types of navigation. The first type of navigation is done by querying and jumping to a place inside the virtual world. The query is based on SPARQL; we can build rich queries, so we can use the full power of ontologies to get a result. The other way of navigating is by following a tour guide, where a path manager builds a path according to the query. The next chapter will explain this in more detail.

Resulting overview

Putting these three challenges together, we get the following overview:

Figure 3.1: Application design - high level overview

Figure 3.1 shows a big ontology icon, with the text main ontology written next to it. This is our ontology, the main ontology containing the principal information
we want to display. The image also shows external information, with a small execution icon and an ontology icon next to it. With this, we would like to indicate that, usually due to legacy reasons, there are many different information types available on the Web 3; there is no single way of providing information. To be able to use this information, we must translate it to a common form we know; we use a small application that transforms/translates the information to RDF. The image also shows that we potentially need a translator for each different type of information resource we would like to use. Even stronger, we can say that it is unlikely that we do not need a dedicated translator for each type of information.

3 A good example of representation differences is how weather information is provided. Dating from 1968, METAR is a raw format, publicly available, which is mainly used by pilots to retrieve weather information around airports. Yahoo Weather provides weather information in the form of webservices. The two types of information are clearly incompatible.

When questioning what common format we should translate to, the answer is obvious. We, as humans, know what information we are retrieving, we know what it means, so we can translate this into RDF. We supply the data with an additional layer of data on top of it, the meta-data telling us what information we are actually processing.

RDF Data Bus

Tim Berners-Lee mentions the RDF Data Bus [16] [13], making legacy information available in RDF format and providing one uniform way of accessing it.

3.2 Why use semantic web technologies?

When displaying information, it seems logical to think in terms of relational databases. They have been forming the backend of many applications for many years. We have chosen a different approach, however: we base our approach on ontologies. Ontologies are meant to facilitate either human-to-human or machine-to-machine communication, but there are many advantages, which can basically be grouped in
three areas [30], of which the most important ones are [41]:

- Communication between people. An unambiguous but informal ontology is sufficient.
- Interoperability among computer systems. For machine-to-machine communication, the ontology is used as an interchange format.
- System engineering, in particular:
  - Reusability: ontologies can be used and reused as a kind of component between software systems.
  - Search: using inference, we can use the metadata of the ontology.
  - Reliability: since ontologies are formal by nature, we can check for consistency, up to some degree, leading to more reliable systems.
  - Specification: ontologies are expressed using the terminology of the domain and thus facilitate the process of identifying requirements.

Figure 3.2: Tim Berners-Lee - The RDF Data Bus
3.3 Discussion

Recently, Tim O'Reilly reflected on what the web was and where it is going now. He states the following [40]: "Web 3.0. Is it the semantic web? The sentient web? Is it the social web? The mobile web? Is it some form of virtual reality? It is all of those, and more." He argues that the Web has become the world and that everyone participating in this world casts a digital shadow containing a wealth of information, which he gave the name Collective Intelligence.

This thesis discusses a combination of emerging web technologies: the semantic web combined with a 3-dimensional representation. We will create a Virtual World based on information coming from an ontology. We add meaning to the virtual objects we display in the VW. We will adapt the VW with external information, we show that our VW is able to deal with changing external information, and finally we will use the ontology to overcome navigational problems: we let the ontology guide us to our destination.
Chapter 4

PathManager

As discussed previously, it is difficult to navigate in 3d worlds. A lot of research has been put into improving navigation, avoiding getting lost in cyberspace. Using 2d maps in combination with 3d worlds, typically via the use of Head-Up Displays (HUDs), is also often applied, although this leads to misalignment problems.

4.1 Semantic mapping extended

We already use the ontology to query for the virtual objects that build up our initial Virtual World; this has been shown in the previous chapter, and we use semantic mappings to relate the virtual objects to their visual representation. We now extend this mapping by attaching points-of-interest (POIs) to each virtual object. For each Virtual Object, we had already retrieved its 3-dimensional representation; we use the same method to retrieve a viewpoint for this object.

If, in real life, we go from one place to another, we follow a certain path. Since paths are seldom straight, we cannot use a starting point and an ending point alone; we need intermediate points that indicate angles, places in real life where the direction changes. POIs with a certain orientation are also known as viewpoints 1.

1 It is important to understand that there is a difference between points-of-interest and viewpoints. A point-of-interest is simply a location; it has coordinates. A viewpoint also has an orientation. A viewpoint in 3-dimensional worlds is also called a camera. In 3-dimensional worlds, we use viewpoints. Therefore the semantic mapping provides a binding to viewpoints and not POIs.
These POIs will be retrieved from the ontology and we apply the semantic mapping in the same way as we did for the Virtual Objects, only this time we map the Virtual Objects to viewpoints. After having re-applied the semantic mapping, we have an in-memory map of all virtual objects, and we have attached viewpoints to them.

We want to connect Virtual Objects so we can derive paths between them. We can connect Virtual Objects by indicating their relative positions, for instance Object A is above Object B. We could also define a less restrictive connection pattern, indicating that virtual object A is connected to virtual object B. Since the ontology provides us with the virtual objects to display, it must also provide us with the connectivity between the virtual objects. In other words, the ontology must provide us with the cognitive map we will use to navigate. Our path manager is now aware of the different virtual objects, their corresponding viewpoints and the connectivity between the objects. We have constructed a graph.

4.2 Navigation by query

As we have explained in the previous chapter, our primary means of navigation will not be user-initiated, using mouse gestures or arrow controls to go forward, turn etc. We will use a query-based navigation. We let the user ask a question to the system, and the system will guide the user to the destination, based on the result of the query.

For our cognitive map, the representation of the map in the Virtual World, we will use a graph connecting different landmarks. Intermediate landmarks will indicate places where the direction of the path changes. The map, represented by a graph, can now be used to guide us from our starting point to our destination. Depending on the type of graph (directed, weighted, etc.), different shortest-path search algorithms, like breadth-first or Dijkstra, can be used. Based on the destination, mapped to a viewpoint in the graph, the path manager will return the path to follow when travelling from A to B.
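To illustrate this (a sketch only; the point names are invented and the actual ontology may use different terms), the connectivity statements retrieved from the ontology can be turned into an in-memory adjacency list, which is the graph the path manager works on:

    // Connectivity pairs as they might come back from a SPARQL query,
    // hard-coded here for the example: "point A is connected to point B".
    var connections = [
      ["poi2", "poi1"], ["poi2", "poi3"], ["poi2", "poi4"], ["poi2", "poi6"]
    ];

    // Build a bidirectional adjacency list: the in-memory cognitive map.
    function buildGraph(pairs) {
      var graph = {};
      pairs.forEach(function (pair) {
        var a = pair[0], b = pair[1];
        (graph[a] = graph[a] || []).push(b);
        (graph[b] = graph[b] || []).push(a);   // store both directions
      });
      return graph;
    }

    var cognitiveMap = buildGraph(connections);
    // cognitiveMap["poi2"] -> ["poi1", "poi3", "poi4", "poi6"]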
So, instead of letting the user navigate through the world itself, we propose a system in which a user queries some sort of engine, and the result is used to move the navigator from viewpoint to viewpoint, imitating a natural path as if we were walking in the Virtual World.

4.3 Constructing the navigation paths

Once we have a cognitive map, we can use a search algorithm to get from a certain starting point to our destination. Since we use viewpoints, where each POI has an orientation, we can implement a navigation path that looks natural. The question is how we get the initial viewpoints, that make up our path, to start with.

Kleinermann et al. [34] use semantic annotations by letting users position a POI on an object or around it. They use two different ways for positioning POIs: grid positioning, where the designer can position POIs according to predefined positions for the POI, and freehand positioning, where the designer can position an object and add a viewpoint anywhere on (or inside) an object. Their definition of navigation paths follows the same approach, linking different landmarks, where a landmark is a POI with an orientation. All their annotations, landmarks etc. are uploaded into a reasoner.

Our approach is a bit different: we used a map on which we placed navigation paths. The intersections and end-points can be used as landmarks. As an example, we will look ahead to the situation we also use in our proof of concept, later on in this document. We take a map of the campus of the Vrije Universiteit Brussel (VUB) as indicated in the following image. We overlay the map with a graph; each end-point and each intersection point becomes a point of interest. We have numbered the points for reference. This implies that we use the ontology to store connections between Virtual Objects, not their relative locations! This graph can be easily stored in the ontology. All we need to say is that a point A is connected to point B. For bidirectional graphs we also encode the link between point B and point A. The graph is ready: we know the different POIs and the connections between them. How do we relate these POIs to viewpoints, points with an orientation, usable in
Virtual Environments?

Figure 4.1: The map represented as graph

The POIs that are stored in the ontology are names of points and their connectivity. These POIs have no orientation. For example, when looking at the intersection in the lower-left corner of the image, we see that POI number 2 is connected to POIs 1, 3, 4 and 6. This implies that for POI number 2, we have to define 4 viewpoints, since viewpoints have an orientation as well. We defined a viewpoint for the path from POI 2 towards POI 1, from POI 2 towards POI 3, from POI 2 towards POI 4 and from POI 2 towards POI 6.

Using Google Sketchup, we navigated to the POIs that we indicated in the previous step. We orient ourselves in such a way that our current camera/viewpoint is looking at the next viewpoint. This position, including orientation, is saved, so we can convert it to usable viewpoints afterwards, in the format we will use to construct our Virtual Environment (X3D, O3D, ...).
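To make this concrete (a sketch; the coordinate values and field names are placeholders, not the values used in the proof of concept), each saved camera position can be stored as a small record keyed by the pair of POIs it connects, so that the path manager can later look up the viewpoint to activate for every step of a path:

    // One viewpoint per ordered pair of connected POIs: a position plus an
    // orientation, as saved from the modelling tool (values are placeholders).
    var viewpoints = {
      "poi2->poi1": { position: [12.0, 1.7, -30.5], orientation: [0, 1, 0, 1.57] },
      "poi2->poi3": { position: [12.0, 1.7, -30.5], orientation: [0, 1, 0, 3.14] }
    };

    // Look up the viewpoint to activate when travelling from one POI to the next.
    function viewpointFor(fromPoi, toPoi) {
      return viewpoints[fromPoi + "->" + toPoi] || null;
    }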
Chapter 5

Approach implementation - a case study

This chapter describes the application built on the ideas that we highlighted in the previous chapters. As described earlier, our application shows a paraverse, a real-life kind of Virtual World (VW), that is driven by an ontology. With this we mean that the information we show in the VW comes from the ontology and, additionally, we use the ontology to help us with navigation. The application we built uses the campus of the Vrije Universiteit Brussel (VUB) as its paraverse. The ontology therefore contains notions of the buildings on the campus and any additional information that can help us with our query-based navigation. For example, we use names, telephone numbers and room numbers of employees of the university, building a minimal telephone book.

5.1 The technical building blocks

When implementing a proof-of-concept (POC), some technological choices need to be made. These choices are related to 3d engines, application environments, related utilities etc. Which ones do we use and why? To build our POC, we will use the following technologies. Most of them have
already been explained in the previous chapters; in this section we will provide a justification of why we chose these specific technologies over alternatives.

X3D - The choice for X3D is made because it is an open standard, supported by W3C. Additionally, it is proposed as part of the new version of HTML (HTML5), which means that, if implemented by browser vendors, no additional plugins will be needed. O3D, Google's API, is open source, so it would have been a very good alternative. O3D is much more tightly coupled to the platform on which it runs (WebGL), whereas X3D is platform independent. The first beta versions of all major browsers currently support HTML5.

Adobe Flex - An arbitrary choice, but again the choice is driven by the fact that the platform is open source and browser independent. Flex has an object-oriented language embedded, called ActionScript, and additionally has a lot of strong widgets, making web development easy. Silverlight (Microsoft) would have been an alternative platform. It has C# as backing language (in fact, any of the languages running on the Microsoft runtime environment can be used), which is an object-oriented language. It also has a strong widget set. The platform is closed and not free, making this a less obvious choice.

jQuery - A Javascript library and framework which is very strong in handling asynchronous requests with the webserver and DOM manipulation. This is also an arbitrary choice; any framework could have been used.

OWL - OWL stands for Web Ontology Language and is a family of languages used to write ontologies.

SPARQL - The SPARQL Protocol and RDF Query Language (SPARQL) is a query language and protocol for RDF.

Joseki - Joseki is an open-source HTTP engine that supports SPARQL queries. It is based on Jena, a Java framework for building semantic web applications. This is an arbitrary choice; it was the first one we found and it serves our purposes.
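As a small illustration of how these pieces fit together (a sketch under the assumption that Joseki runs locally and exposes a SPARQL endpoint at /sparql; the endpoint URL, the namespace and the query are placeholders, not the actual configuration used in the thesis), a client can submit a SPARQL query over HTTP with jQuery and process the JSON results:

    // Send a SPARQL query to a (hypothetical) local Joseki endpoint and
    // ask for the results in JSON form.
    var query =
      "PREFIX vub: <http://example.org/vub#> " +    // placeholder namespace
      "SELECT ?building WHERE { ?building a vub:Building }";

    $.ajax({
      url: "http://localhost:2020/sparql",          // assumed endpoint location
      data: { query: query, output: "json" },       // output parameter assumed
      dataType: "json",
      success: function (result) {
        // Standard SPARQL JSON results: one binding object per row.
        result.results.bindings.forEach(function (row) {
          console.log("Found building: " + row.building.value);
        });
      }
    });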
Let's map these technologies to the design we have already highlighted in our approach:

Figure 5.1: Application design - high level overview

When we look at the image, we can split it into three parts: the ontologies (web resources) that are accessible via HTTP and connected via the bus, our server-side application containing the semantic mapping and the path manager, and the client side, containing the browser for the virtual environment. We will now position the different technologies in these three blocks:

Ontology related

- Joseki - The HTTP engine that responds to SPARQL queries. We can access this engine from our application (server side) and query it via SPARQL. Joseki can be seen as the implementation of the bus in the image.
- OWL - The ontologies themselves are written in OWL. OWL is one of the languages that can be used to write ontologies; OWL, in its turn, is based on RDF.

Server side application

- Adobe Flex - The server side of the application, including the Semantic Mapping and the Path Manager, is written using Adobe Flex, a Flash-based application framework that uses an object-oriented language called ActionScript.

Client side application

- X3D - The language used to build up our Virtual Environments.
- jQuery - A Javascript library that is very good at handling asynchronous callbacks.
- Adobe Flex - The controls of the client side of the application are also written in Adobe Flex. In fact, a Flex application often provides a server-side and a client-side part, where the client side is built with widgets that interact with the server side.

The ontology

The ontology we have used for the proof of concept is based on an existing ontology by Patrick Murray-John [37]. It describes a Campus consisting of CampusPlaces, Courses, Organizational units, and Persons extending foaf:Person (FOAF stands for Friend of a Friend and is an ontology describing persons, activities and relationships). We have extended this ontology by adding the following classes:

- Staff, extending the Person class from our base ontology
- AcademicStaff, extending Staff
- Professor, extending AcademicStaff
- Assistent, extending AcademicStaff
- Researcher, extending Assistent
- AdministrativeAndTechnicalStaff, extending Staff
- Building, extending the CampusPlace class from our base ontology
- Floor, extending the CampusPlace class from our base ontology
- Room, extending the CampusPlace class from our base ontology

In our ontology, each Staff member has a room. Rooms are logically assigned to Floors, and floors in their turn to Buildings. It is clear that this is a very limited setup, but the goal was not to design a university ontology. We wanted to have information available which we could relate to campus places, which are mapped to points-of-interest in our application. We use the ontology to query for a person's name, for instance, and as a result we get the building in which this person has a room. The ontology stores triples, which are basically relationships between two nodes. We use the Web Ontology Language (OWL) to build our ontology.

Web Ontology Language

As a small example, the definition of a Professor class in OWL:

    <owl:Class rdf:about="#Professor">
      <rdfs:subClassOf rdf:resource="#AcademicStaff"/>
      <owl:disjointWith rdf:resource="#Researcher"/>
    </owl:Class>

We define that a professor is part of the academic staff, and a professor is not a researcher. When creating an object (an instantiation of a class), we might have something like this:

    <Professor rdf:about="#...">
      <hasFirstName>Olga</hasFirstName>
      <hasSurName>De Troyer</hasSurName>
      <hasTelephoneNumber>...</hasTelephoneNumber>
      <hasFaxNumber>...</hasFaxNumber>
      <worksForUnit rdf:resource="#..."/>
      <hasRoom rdf:resource="#..."/>
    </Professor>

The relationships between nodes in the example above are, for example, hasRoom, which connects a professor with a room. The room refers to another ontology that is defined in the header of this specific ontology.

SPARQL

We use OWL as the language to define our ontology; we also need a way to retrieve the information from it. SPARQL is a standardized RDF query language; we use it to query our ontology. Based on the example previously given, we also provide a small SPARQL query, an example that queries for a room number:

    PREFIX vub: <>
    PREFIX rdfs: <>
    SELECT ?professor ?roomNumber
    FROM <Data/staff.owl>
    WHERE {
      ?professor a vub:Professor .
      ?professor vub:hasSurName "De Troyer" .
      ?professor vub:hasRoom ?room .
      ?room vub:hasNumber ?roomNumber
    }

Other university ontologies

Some other projects (other than the base of our ontology) have also explored the possibilities to model a campus. Benjamin Nowack proposes Semantic Campus, a FOAF extension that describes campus-related resources such as universities, departments, lecturers and students. It has no notion of physical, geographical locations. Jeff Heflin defined a university ontology, but it has a strong focus on documents that are published and has no location information.
While the concept has not been finalized yet, Patrick Murray-John is thinking [38] about a giant edu-graph that combines ideas, subjects, people and the resources used in teaching and learning. This project is interesting, since it combines several ontologies: FOAF (Friend Of A Friend), the Bibliographic Ontology, GeoNames, and SIOC (Semantically Interlinked Online Communities), to name a few.

External information

We have an ontology that defines buildings, employees of the university etc. But we also mentioned that we use external information to enrich our virtual world. For our proof of concept, we used METAR information, weather information often used by pilots. This is an old format, dating from 1968, obviously not available in RDF. We wrote a small application that parsed the METAR string and returned it in RDF. The input string and the resulting RDF information are added in the appendix.

The Virtual World - The campus in 3D

Sye Nam Heirbaut has developed a 3D version of the campus using Google Sketchup, a tool that can be used for 3d modeling. His work is used as the base for our 3d world. This work was done as a student job, unrelated to any university project. The result of his work is used in this project however, since it provided a complete model of the campus, and we could export the models to VRML. Using VRML and Vivaty Studio we could translate it to X3D. This section describes the steps involved in creating a Virtual World in X3D based on the Google Sketchup models. We will start with a small explanation of X3D.

extensible 3D

We have chosen extensible 3D (X3D) for the fact that it is an open standard, supported by W3C, and it is designed in such a way that it is platform independent. Scenes are not tied to underlying hardware specifications, screen resolutions etc. X3D files are read and parsed by X3D browsers. An X3D browser is responsible for the interpretation, execution and presentation of the X3D Scene Graph. It is
the browser that understands the contents and knows how to render the information, resulting in the display of virtual objects. We call the representation of an X3D Scene Graph a Virtual World. Browsers can be desktop applications or plugins in web-browsers. Before going into more detail, we will provide a small example of an X3D file, based on an example by Don Brutzman [19]:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE X3D PUBLIC "" "/">
    <X3D>
      <head>
        <meta name="filename" content="geometryexample.x3d"/>
        <meta name="author" content="Don Brutzman"/>
        <meta name="created" content="8 July 2000"/>
        <meta name="revised" content="5 January 2001"/>
        <meta name="url" content="examples/course/geometryexample.x3d"/>
        <meta name="description" content="User-modifiable example to examine the role of the
          geometry tag. See what nodes can be replaced: geometry (no) and Cylinder (yes)."/>
        <meta name="generator" content="X3D-Edit, translation/readme.x3d-edit.html"/>
      </head>
      <Scene>
        <Shape>
          <Appearance>
            <Material diffuseColor=""/>
          </Appearance>
          <Cylinder/>
        </Shape>
      </Scene>
    </X3D>

This example demonstrates a simple cylinder. While in this specific case the metadata may seem heavy, in larger files this will be less of an issue. Each X3D application [4]:

1. implicitly establishes a world coordinate space for all objects defined, as well as all objects included by the application;
2. explicitly defines and composes a set of 3D and multimedia objects;
3. can specify hyperlinks to other files and applications;
4. can define programmatic or data-driven object behaviours;
5. can connect to external modules or applications via programming and scripting languages.

This leads to the following overview as described in Image 5.2:

Figure 5.2: X3D System Architecture

The image shows that a browser should be able to render X3D, but also VRML, X3D's predecessor, and binary X3D. The Scene Access Interface (SAI) is used for event passing with external applications.

Scene Access Interface

The Scene Access Interface (SAI) is an application programmer interface (API) that defines runtime access to the Scene. The SAI is used to create nodes, modify nodes, send events to nodes etc. We use the library created by Ajax3D [5] for our application. This library already defines Javascript methods that use the SAI to create an initial scene and add
virtual objects.

Google Sketchup

We received an initial file of the entire campus in Google Sketchup format. The file, as we received it, was not usable for the web, due to its size and format, so we had to make it web-friendly. Starting from the initial file, we began by isolating all buildings. With this we mean that for each building on the campus, we removed all unrelated information. In other words, we started from the initial campus and removed all buildings, except the one we wanted to isolate, on a per-building basis. The result is that, instead of one big file containing the entire campus with all its buildings, we have created multiple files, one per building. The buildings themselves still have their geo-location, which was available in the initial Sketchup file.

The next step was to create a simplified version of each building, removing as much detail as possible and reducing the building to basic building blocks. This was typically done by taking the roof of the building, removing the entire structure underneath and afterwards extending the roof down to the ground. Afterwards, we applied a texture to the buildings. The texture itself was taken from an image created with the Google Earth plugin called V-Ray for Sketchup 4. The final step in Google Sketchup was to export the buildings to VRML format. 5

Vivaty Studio

Vivaty Studio, formerly known as Flux Studio, is a tool that allows the user to manipulate 3d scenes. The application allows, amongst other things, importing VRML scenes and exporting them to either uncompressed X3D or compressed X3D. We actually used the tool for both. We exported to uncompressed X3D to get the viewpoint we previously set in Google Sketchup 6,

4 We also directly applied a default viewpoint to the scene, for later use.
5 This is a feature that is only available in the Google Sketchup PRO version.
6 Vivaty somehow loses the viewpoint parameters; these need to be set manually.

and we also used it to get a
compressed version of the scene. 7 Binary compression of the X3D files greatly enhances download speed. The downside is that the DOM structure of the binary file is no longer accessible, which implies that the scene is no longer modifiable from the SAI afterwards. The solution to this problem is to compress the visual part of the scene after having separated it from all non-visible nodes (like viewpoints, scripts etc.). The non-visual nodes are placed in the text version, which references the binary file. An example can be found in the appendix.

The reason for simplifying and compressing becomes clear when we look at the following table, which highlights the file-size differences between the different possibilities (file size of the resulting .x3d files):

                    Sketchup   VRML      Uncompressed X3D   Compressed X3D
    Unmodified      13.6 Mb    17.3 Mb   -                  -
    Simplified      10.5 Mb    119 Kb    227 Kb             10 Kb
    With textures   10.6 Mb    131 Kb    242 Kb             20 Kb

Table 5.1: Compression differences between VRML, X3D and X3D compressed

In case of the unmodified version, Vivaty Studio is unable to export it as either form of X3D (uncompressed or compressed). The result of the simplified version and the version with an applied texture can be found in the appendix. We could improve the display result by using more detailed base structures etc. (less abstraction), but this would increase the number of polygons that define the building. We could also improve the quality and the number of textures, which are simple images applied on a surface, but this would increase the file size and thus also increase the render time. We chose a very simple texture to reduce the file size of the image, which means that we load the entire scene faster. The trade-off is between appearance and speed.

7 In the compressed version, we removed the viewpoint so we did not have any duplicate viewpoints.
Semantic mapping

We explained in our chapter on the approach that we use a semantic mapping to relate ontologies to virtual environments. In our approach, we use OWL to define our ontology, we use SPARQL to retrieve information from it, and on the other side we use X3D for the 3-dimensional representation of these objects. How does the semantic mapping connect these technologies?

When the application starts up, it will start by getting information on the virtual objects that we need to display. This information comes from our ontology, but it contains no visualisation information: we receive the buildings we would like to display in our paraverse. To get the display information, we map the objects to their 3-dimensional representation. The mapping uses an initial query to ask the ontology for the virtual objects to display. We also need a mapping file that maps each object to an X3D file. When we have the objects we would like to display and we have their 3-dimensional representation in the form of an X3D file, we perform a function call on the Scene Access Interface, so we can add the object to our Virtual World. It is important to realise that we add multiple objects. For each object we received from our ontology, we perform a mapping. A mapping might look like this:

    <SceneMapping>
      <Mapping>
        <URI>...</URI>
        <SceneURL>...</SceneURL>
      </Mapping>
      ...
    </SceneMapping>

We have mapped the information from our ontology (building M, indicated by the URI) to a 3-dimensional representation in X3D. Using the Ajax3D implementation [5], we already have a function available to add the specific X3D information to the existing virtual world.
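The following fragment sketches this start-up flow. It is an illustration only: the mapping structure, the URIs, the file names and the function addSceneFromUrl (standing in for the Ajax3D/SAI call) are assumptions, not the actual thesis code.

    // Mapping from ontology URIs to X3D scene files, as it could look after
    // parsing the mapping file (URIs and file names are placeholders).
    var sceneMapping = {
      "http://example.org/vub#BuildingM": "scenes/buildingM.x3d",
      "http://example.org/vub#BuildingF": "scenes/buildingF.x3d"
    };

    // Hypothetical wrapper around the Ajax3D call that inlines an X3D file
    // into the running scene graph via the SAI.
    function addSceneFromUrl(sceneUrl) { /* SAI call would go here */ }

    // For every virtual object returned by the initial SPARQL query,
    // look up its 3-dimensional representation and add it to the Virtual World.
    function populateWorld(objectUris) {
      objectUris.forEach(function (uri) {
        var sceneUrl = sceneMapping[uri];
        if (sceneUrl) {
          addSceneFromUrl(sceneUrl);
        }
      });
    }

    populateWorld(["http://example.org/vub#BuildingM"]);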
External information

We also use the semantic mapping to map external information. The procedure is the same, except for the fact that we can no longer use the Javascript call from Ajax3D that was used to add virtual objects. External information might alter the virtual world, instead of adding a virtual object. This implies that for external information, we do not map to an X3D file that represents the information; we map to a Javascript function call instead. It will be the implementation of the Javascript function that handles the external information. A small example will make things clear: imagine that we would like to add weather information to our virtual world (this is actually what we do in our proof of concept). We do not add any objects when the weather changes; instead, we change the sky according to the weather conditions. We might also want to activate some fog in our virtual world. In this case, we will have a Javascript function that receives weather information and modifies the scene accordingly.
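As an illustration (a sketch only; the field names, the visibility threshold and the helper setSceneField standing in for the actual SAI call are assumptions), such a function could look as follows:

    // Hypothetical helper that forwards a field update to the X3D scene via the SAI.
    function setSceneField(nodeName, fieldName, value) { /* SAI call would go here */ }

    // Receives (already translated) weather information and adapts the world:
    // no new objects are added, the sky and fog of the scene are modified instead.
    function applyWeather(weather) {
      // Darken the sky when it is overcast, keep it blue otherwise.
      var skyColor = (weather.cloudCover === "overcast") ? "0.4 0.4 0.5" : "0.2 0.5 0.9";
      setSceneField("sky", "skyColor", skyColor);

      // Activate fog when visibility drops below an (arbitrary) threshold in metres;
      // in X3D a visibilityRange of 0 disables the Fog node.
      if (weather.visibilityMetres < 1000) {
        setSceneField("fog", "visibilityRange", String(weather.visibilityMetres));
      } else {
        setSceneField("fog", "visibilityRange", "0");
      }
    }

    applyWeather({ cloudCover: "overcast", visibilityMetres: 800 });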
can take on the campus. The graph is created by taking a map of the campus and drawing routes on top of it. The resulting graph is shown in Figure 5.3 (the actual implementation only covers part of this graph, because Vivaty Player only supports a limited number of viewpoints, namely 65). For the sake of simplicity, we use an unweighted bidirectional graph, so we can easily apply a Breadth-First Search (BFS) algorithm to dynamically find a shortest path between two nodes. (A more realistic approach would be to use a weighted graph, with the lengths of the paths as the weights of the edges. This is possible of course, but then we would have to use a more elaborate search algorithm, such as Dijkstra's shortest path algorithm. This approach is out of scope for this POC, since it does not add any additional value to the challenges we address.) We retrieve the connections between the different points from the ontology. They are mapped to viewpoints: points with a certain location. Since these viewpoints are non-visible elements of a SceneGraph, we can use the same semantic mapping as we used to add virtual objects to the scene. We also use the same Javascript function call, using the Ajax3D API.

At this point, we have a fully initialized application. The initial virtual objects are shown, the world is enriched using external information, we have a representation of the Virtual World in the form of an in-memory graph which is handled by the PathManager, and we have the query available to respond to any question coming from the user. The process of guiding the user is the following:

1. The user submits a query. The application translates this to a SPARQL query (static mapping).
2. Whether we ask for information about persons, rooms or courses, the result will always be a building. This result is passed on to the PathManager. The PathManager knows the current location and the destination, which is related to the result of the query. It returns the starting viewpoint, a list of intermediate viewpoints and the destination viewpoint, based on a graph search algorithm.
3. For each step in the list of viewpoints, the render engine receives a function call to change the current viewpoint. We have added a small delay between each function invocation to give the navigation a more natural look and feel.
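To make the graph search concrete, here is a minimal sketch of such an unweighted BFS over viewpoint identifiers. The class name PathManager is taken from the text, but the field and method names (connect, findPath) are assumptions for illustration; the actual implementation in the POC may differ.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class PathManager {

    // adjacency list: viewpoint id -> directly connected viewpoint ids
    private final Map<String, List<String>> graph = new HashMap<>();

    // add a bidirectional connection between two viewpoints
    public void connect(String a, String b) {
        graph.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        graph.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    // returns start, intermediate viewpoints and destination in walking order,
    // or an empty list if no path exists
    public List<String> findPath(String start, String destination) {
        Map<String, String> previous = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        previous.put(start, null);
        queue.add(start);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (current.equals(destination)) {
                // walk back from the destination to reconstruct the path
                LinkedList<String> path = new LinkedList<>();
                for (String v = destination; v != null; v = previous.get(v)) {
                    path.addFirst(v);
                }
                return path;
            }
            for (String next : graph.getOrDefault(current, Collections.emptyList())) {
                if (!previous.containsKey(next)) {
                    previous.put(next, current);
                    queue.add(next);
                }
            }
        }
        return Collections.emptyList();
    }
}

In the application, each viewpoint in the returned list would then be passed to the render engine, with the small delay mentioned in step 3 above, to simulate walking.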
The User Interface

We have chosen a minimalistic search interface. The user can select the view method (either jump or guided tour) and provide text as search input. Using ConcurTaskTrees (CTTs) [6], we come to the following overview:

Figure 5.4: Concurrent Task Tree of the user interface

The overview is very limited: it tells us that when we provide a request, we need to select a view method and provide a query, and this input can be provided in any order (indicated by the operator). Afterwards the first task is terminated and the second task is performed, based on information coming from the first task (indicated by the []>> operator).

5.2 The result

We have created two versions of the application: a guided tour, and a version in which we use a simple search engine that queries the ontology, providing us with query-based navigation.
In both cases, the user interface is very similar.

Figure 5.5: Screenshot of the application in action

Figure 5.5 shows the version of the application in which we can query the ontology. The application shows that we have searched for building M, and that the result gets mapped to a building. The building gets highlighted, which is shown in both the 2d map and the 3d world. There are two remarks concerning the image: The purpose was to highlight the building by changing its color; however, Vivaty doesn't allow querying the scene graphs of binary scenes, so we placed a transparent box around the building as a workaround. The 2d map was intended to be embedded as a HUD, but Vivaty Player is not stable enough to allow this; the map is now an embedded image in the webpage and is not part of the scene.

Guided tour - Esplanada

The guided tour is the application we started with. It follows the travel control way of navigation [56], or Navigation by Query [52], but in a very simplistic
way. Navigation by query implies that the user does not navigate himself; it is the system that guides the user through the Virtual World. The Esplanada website shows the visitor the location of interesting points/buildings on the campus by starting from a bird's-eye view, zooming onto the requested location and zooming back out again. These movies are created in Google Sketchup and recorded as Flash movies. In fact, these movies are based on the same Google Sketchup model as we used for our POC. The 3d world of our application mimics the behavior of the Esplanada introduction movies on the Esplanada website of the VUB. The Esplanada demonstration was easy, simply proving the concept that we could navigate in a Virtual World. It contained hard-coded POIs (Points Of Interest) that are available for selection in a drop-down box, as well as hard-coded paths from a starting point to the destination and back again. The fact that hard-coded POIs are used implies that we make no use of the ontology yet. The viewpoints that were used for the movies were already available in the Google Sketchup project, so we started by extracting these viewpoints in the form of VRML and translating them to X3D using Vivaty Studio. The Points Of Interest (POIs) are available in a drop-down list in the application. Of course, they are indicated by a name that is understandable for the user, like Restaurant, Building C, etc. A dedicated PathManager was written to return the viewpoints as a path to follow when navigating from a starting point, typically the current location of the user, to a destination, the result of the query initiated by the drop-down box.

Query based navigation

A far more interesting application of the design is the use of the ontology to provide us with the destination viewpoint. Instead of supplying a drop-down box with predefined locations, we retrieve the locations from the ontology. The ontology is used to build a conceptual map of the campus by connecting all viewpoints as a graph. This graph is managed by our PathManager. The drop-down box is replaced by a search option, which will be used to query the ontology. The result of the query will have some sort of geographical knowledge
which will be used to query the PathManager for a path from the current position to the destination. The destination is a viewpoint, which is related to the result of the query. The graph manager returns the route to follow based on its internal graph. Moving between viewpoints now becomes a fluid movement, simulating the path a user would take when walking from one point to another in a real-life environment.

Considered approaches

We examined some additional approaches for the POC, but had to abandon them for technical reasons. Here is a small list that might be useful for future reference:

- The use of Level Of Detail (LOD). The idea was to replace the image with one with a higher level of detail when closer to the object. This way, we could load only the low-resolution image to start with and, when approaching the object, load the high-resolution version, so we could increase the level of detail when zooming in. This does not work, since LOD requires that both versions are preloaded.
- Use of Javascript. In general we can say that Vivaty Player and Javascript don't go well together. The player crashes often and reacts differently depending on the version. We therefore completely avoided the use of scripts within the X3D scene.
- The use of O3D. Google provides a tool to easily transfer .x3d files to .o3d files, in other words to transfer X3D objects to O3D. This was very easy to implement, and the result looked very good. The approach proved that the ontology did not need to change when changing the render engine. However, changing the viewpoints was a bit more challenging, and this attempt was abandoned so we could focus on our main project.

Existing similar paraverses

A lot of universities have a Virtual University (VU) in Second Life; these VUs explore the aspects of online learning, collaboration, etc. Besides that, a lot of universities have a pure visual 3d representation in Google Earth. These fall outside
the scope of this research, since they are based on virtual campuses for e-learning purposes. There are, however, some projects that explore similar possibilities:

- F.E. Dozzi created a proof of concept (2000) for his/her master thesis to examine 3-d Virtual Worlds on the Internet using VRML, modeling the campus of the University of Amsterdam. [12]
- Joris Gillis started an (unfinished) project at the University of Leuven where he uses Google Sketchup to create a 3d map of the campus and publish it on the web. [13]

Neither of these two projects used ontologies.

[12] eliens/archive/scripties/fedozzi/
[13] s /projects/campus3d
Chapter 6 Overall limitations

Using information from the ontology in combination with semantic mappings gives us a relatively flexible approach, but there are some limitations. The limitations fall under two categories: the semantic mapping being static, and the PathManager using a predefined graph.

6.1 Semantic mapping

The semantic mapping bridges the information from the ontology and the virtual environment. The main limitation of our approach lies in the fact that the mappings are static. Users cannot influence the mappings; this needs to be done by the maintainer of the application.

Adding new objects

Adding new virtual objects does not require code changes. The ontology needs to change, since the query now also must retrieve the new object. If the object is provided in the same way as the other virtual objects, the query (for instance getAllBuildings) does not need to change. However, the semantic mapping itself is static: adding new virtual objects requires that the mapping be adapted. If no mapping is available, virtual objects
will not be displayed. This as such is not a real limitation; the limitation lies in the fact that this mapping is implemented in text files that reside on the server. The mappings are only editable by the application administrator, not by users.

Changing the queries

This mapping uses SPARQL to query the ontology; these queries are, just like the mappings, encoded in text files. There are two issues with this approach. The first issue is that the interfaces of these queries are fixed: if we would like to return more information from the query, or change parameter names, we need to adapt not only the query but also the application code to deal with those changes. The second issue is that, again, the queries reside on the server and cannot be modified by the user of the system. It is up to the maintainer of the application to decide which information is visible and in what way.

6.2 PathManager

The PathManager queries the ontology for POIs and the connections between them. These connections are currently coded in the ontology, not derived via inference. If routes change, the ontology needs to be adapted. Since we use semantic mappings between the POIs and viewpoints in our virtual environment, we also need to adapt the mapping and the viewpoints. Again, these mappings are done at the server.
Chapter 7 Conclusion

Web 3.0 is on our doorstep, and although the definitions of what Web 3.0 exactly is differ, most definitions agree that the semantic web is part of it, and possibly also Web3D. In this thesis, we have used the semantic web (ontologies) to enrich the 3-dimensional web: enrichment from a data point of view, but also from a usability point of view. We have done this by addressing three challenges:

- Initiating a Virtual World using ontologies
- Enriching the Virtual World with near real-time external data
- Using the ontology to help us with navigation

We have shown that we can use an ontology as a base for Virtual Environments. We use semantic mappings to map our information on virtual objects to their 3-dimensional representation. Doing this, the virtual objects we show become much more than a collection of polygons, textures, etc. that make up a SceneGraph. The virtual objects in a virtual environment can be given a meaning, becoming objects of which we know what they are. Additionally, we have shown that it is possible to enrich the virtual environment with external data. We can combine multiple external resources into a single new resource, which follows the idea of mashups, and all this is done in a near real-time way. To be able to use different data formats, we have used dedicated
translators that parse the data and provide it in RDF format, so we can query it in the same way we query the ontology. Finally, we have used the ontology to help us with navigational issues. It is easy to get lost in virtual worlds, so instead of letting the user navigate, we let our application help the user. Our approach is based on Navigation by Query. We have introduced a path manager that, in combination with the semantic mapping, constructs a cognitive map of the virtual world we are using. This map is internally represented as a graph, and the path manager can construct our navigation path using a search algorithm on this graph. It uses the current location as the starting point, and the destination is tied to the result of a query to the ontology. This path is visualized as a sequence of viewpoints which we follow, giving the user the impression that he is walking in the virtual world.

7.1 Future work

Currently, our ontology has no knowledge of locations. We query the ontology for buildings on the campus or, in a more specific approach, we query the ontology for the virtual objects to display. The result is a list of buildings/virtual objects, and the semantic mapping maps these to their visual representation. It is the visual representation that has geospatial knowledge. A future improvement might be to extend the ontology with geospatial knowledge using GeoRSS encoding. The W3C Geospatial Vocabulary defines a basic ontology and OWL vocabulary for the representation of geospatial properties including points, lines, polygons, boxes and more. We could also think of adding referential knowledge to the ontology, for example: building A is next to building B. This knowledge can be used, via reasoning, to create the graph that is used by our path manager for tour guides. Another thing we can imagine is a more flexible approach towards external information. At the moment, external information is mapped to function calls at the client side. This mapping is static; it needs to be added by the application maintainer. We could imagine a more dynamic approach, where the user can connect the external information to the appropriate function call. This will lead to some additional challenges where the user also needs the ability to inject client-
side calls, like Javascript functions. Of course, we can extend the combination of ontologies and virtual worlds by adding multiple users, chat options, etc. It would be very interesting to log in, see whether certain persons are online, and directly have the ability to start a chat session, collaborate on documents, and so on. Finally, the current limitations need to be addressed. The semantic mappings and the queries to the ontologies are static; they reside in files on the server and can only be modified by the application administrator. It would be interesting to see how this can be made more flexible.
Appendix A Model simplification

Building B/C. The unmodified version has a file size of more than 17 Mb when exported to VRML. The reduced version has a file size of 119 Kb when exported to VRML. Afterwards we apply textures. The export to VRML now takes 131 Kb; the size of the texture is 12 Kb. In X3D we can use binary (compressed) scenes, reducing the overall size to 20 Kb.

Figure A.1: Google Sketchup - single building, original Sketchup version
Figure A.2: Google Sketchup - single building, simplified
Figure A.3: Google Sketchup - single building, simplified, textures applied
|
http://docplayer.net/988168-Vrije-universiteit-brussel-faculteit-wetenschappen-departement-informatica-en-toegepaste-informatica.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Getting Started with JAXB
The Java Architecture for XML Binding (or JAXB) is an extremely useful library for converting from a Java POJO-style object model to XML and back. In fact it has become so popular that it is now included in the J2EE platform. In this introduction, I will focus mainly on getting your Java objects into an XML representation – but this is just scratching the surface when it comes to the features of JAXB.
The first step is creating a simple Maven quickstart project and adding the following dependencies to your POM file:
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.2</version>
</dependency>
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <version>2.2</version>
</dependency>
Let’s start with a basic example. Personally, I like to start with my Java object model and work my way down. Here are my objects (getters and setters are necessary but not shown).
@XmlRootElement
public class Prison {
    private Cell cell;
    private Guard guard;
    ...
}

public class Cell {
    private String id;
    private Inmate inmate;
    ...
}

public class Inmate {
    private String name;
    private String id;
    private String sentence;
    private String description;
    private String history;
    ...
}

public class Guard {
    private String name;
    private String assignment;
    ...
}
Notice that only one JAXB annotation is used in my entire object model definition. All I need to do is specify the root element and JAXB will do the rest.
In order to see what XML is generated, just write a simple unit test:
public void testXml() throws JAXBException {
    // instantiate model
    Prison prison = new Prison();

    Guard guard = new Guard();
    guard.setName( "Jim" );
    guard.setAssignment( "Toilet scrubbing" );

    Inmate inmate = new Inmate();
    inmate.setName( "Billy the Knife" );

    Cell cell = new Cell();
    cell.setId( "CB4" );
    cell.setInmate( inmate );

    prison.setGuard( guard );
    prison.setCell( cell );

    // get instance of JAXBContext based on root class
    JAXBContext context = JAXBContext.newInstance( Prison.class );

    // marshall into XML via System.out
    Marshaller marshaller = context.createMarshaller();
    marshaller.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, true );
    marshaller.marshal( prison, System.out );
}
The resulting XML will look something like this:
<prison>
    <cell>
        <id>CB4</id>
        <inmate>
            <name>Billy the Knife</name>
        </inmate>
    </cell>
    <guard>
        <assignment>Toilet scrubbing</assignment>
        <name>Jim</name>
    </guard>
</prison>
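Going the other way, from XML back to the object model, is just as short. Below is a minimal, hedged sketch of unmarshalling; the file name prison.xml and the testUnmarshal method are assumptions for illustration (any InputStream or Reader works as well), and it relies on the same Prison model plus the imports javax.xml.bind.Unmarshaller and java.io.File.

public void testUnmarshal() throws JAXBException {
    // build the context from the same root class as before
    JAXBContext context = JAXBContext.newInstance( Prison.class );

    // unmarshal the XML document back into the object model
    Unmarshaller unmarshaller = context.createUnmarshaller();
    Prison prison = (Prison) unmarshaller.unmarshal( new File( "prison.xml" ) );

    // the object graph is populated again
    System.out.println( prison.getCell().getInmate().getName() ); // prints: Billy the Knife
}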
As you can see, JAXB is extremely simple to use and very quick to get started with. If you have a simple model, you only really need to know one annotation! This is a relief for those of us who have struggled with the dozens of tags in Spring and Hibernate. However, JAXB is also useful for more complex models, in which case an XML schema should be provided. I will discuss this further in the future.
|
http://arthur.gonigberg.com/2010/04/21/getting-started-with-jaxb/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
[0.926103] No filesystem could mount root, tried: romfs
[0.9262222] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)
[0.926356] Pid: 1, comm: swapper Not tainted 3.2.45 #1
[0.926433] Call Trace:
[0.926515] [<c143cb341>] ? printk+0x1s/0x1f
[0.926593] [<c143b248>] panic+0x5c/0x138
[0.926673] [<c15f1b51>] mount_block_root+0x21e/0x23e
[0.926753] [<c1002931>] ? do_notify_resume+0x31/0x70
[0.926832] [<c10fa13c>] ? sys_mknod+0x2c/0x30
[0.926910] [<c15f1754>] ? start_kernel+0x31e/0x31e
[0.926989] [<c15f1d3a>] mount_root+0xa1/0xa7
[0.927082] [<c15fe8e>] prepare_namespace+0x14e/0x192
[0.927164] [<c10eb65>] ? sys_access+0x25/0x30
[0.927242] [<c15f187a>] kernel_init+0x126/0x12b
[0.927320] [<c1443206>] kernel_thread_helper+0x6/0x10
Warning: '/proc/partitions' does not exist, disk scan bypassed
Warning: Unable to determine video adapter in use in the present system.
Warning: Video adapter does not support VESA BIOS extensions needed for display of 256 colors. Boot loader will fall back to TEXT only operation.
Added Linux *
4 warnings were issued.
mkinitrd -c -k 3.2.45 -m ext3 -f ext3 -r /dev/sda1
[ 2.511316] EXT-3-fs (sda1): error: couldn't mount because of unsupported optional features (240)
mount: mounting /dev/sda1 on /mnt failed: Invalid argument
ERROR: No /sbin/init found on rootdev (or not mounted). Trouble ahead. You can try to fix it. Type 'exit' when things are done.
/bin/sh: can't access tty; job control turned off
/#
Error: No /lib/modules/3.2.45 kernel modules tree found for kernel "3.2.45"
slackpkg upgrade kernel-source.
|
http://www.linuxquestions.org/questions/slackware-14/installed-slack-14-updated-kernel-to-3-2-45-setup-lilo-conf-reboot-kernel-panics-4175477649/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Invoke java from within BPEL module - Kellie Koch, Jul 14, 2010 3:31 PM
We have a JBOSS 5.1.0 server that has Riftsaw 2.1.0.CR2 with Apache ODE. I need to run a BPEL 2.0 process on that server which calls a java class instead of a web service. I have not seen anything in the BPEL Designer that allows me to do that. I know I can expose the java class as a web service, but I was told not to go that route. Does anybody have any suggestions?
1. Re: Invoke java from within BPEL module - Jeff DeLong, Jul 14, 2010 4:37 PM (in response to Kellie Koch)
Invoking a Java class from a BPEL process is not a feature of the BPEL 2.0 specification. BPEL is a standard for Web services orchestration, and Riftsaw does not provide such a capability to invoke Java classes directly. If it did, it would be non-standard, and your process definition would not be portable to other BPEL product implementations.
I am not sure why you were told not to expose the Java class as a Web service, since this is pretty easy to do using JSR-181 annotations. JBoss Tools and JBoss Application Server support this capability very nicely.
On the other hand, if all of your services are Java classes, perhaps BPEL is not the right orchestration language.
2. Re: Invoke java from within BPEL module - Kellie Koch, Jul 14, 2010 6:55 PM (in response to Jeff DeLong)
Thanks Jeff...I did have one other possible option I wanted to get your opinion on, and that was using Apache WSIF which gives you a WSDL with a java binding. I am assuming I could then create a BPEL partner link using that WSDL. Have you had any experience with this, and if so, do you still think I should just expose the java class as a web service instead.
3. Re: Invoke java from within BPEL module - Jeff DeLong, Jul 14, 2010 9:26 PM (in response to Kellie Koch)
I don't think this would actually work, as the client in this case is Riftsaw. I don't think Riftsaw would be able to consume a WSDL with a Java binding, as this requires the client to consume the service by instantiating and invoking methods on a Java class.
So I still think your best bet is to expose your Java class using JSR-181 annotations. Let me know if you need some advice on how to do this, and I can show you a simple example.
4. Re: Invoke java from within BPEL module - Kellie Koch, Jul 15, 2010 4:43 PM (in response to Jeff DeLong)
Thanks Jeff...I do not need an example. I was able to do as you specified and use the web service annotations in my java class. I appreciate all your help!
5. Invoke java from within BPEL module - Laura Simona, May 9, 2011 10:59 AM (in response to Jeff DeLong)
Hi Jeff,
Could you please provide me an example of how are there annotations used?
Thank you,
Laura
6. Re: Invoke java from within BPEL module - Jeff DeLong, May 9, 2011 11:17 AM (in response to Kellie Koch)
Here is a code snippet.
/**
* Session Bean implementation class PolicyQuoteEntityWS
*/
@WebService
@SOAPBinding(style = Style.DOCUMENT, use = Use.LITERAL, parameterStyle = ParameterStyle.WRAPPED)
@Stateless
public class PolicyQuoteEntityWS implements PolicyQuoteEntityWSLocal {
@PersistenceContext(unitName="PolicyQuoteEntity")
EntityManager entityManager;
public PolicyQuoteEntityWS() {
}
@WebMethod
@WebResult(name = "policyQuote")
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public PolicyQuote createPolicyQuote(
@WebParam(name = "policyQuote") PolicyQuote policyQuote) {
entityManager.persist(policyQuote);
return policyQuote;
}
I have attached the entire example of a stateless session bean that is exposes as a web service. This SLSB uses JPA. Youo ca nimport the project into your Eclipse / JBoss Tools IDE.
- PolicyQuoteEntitySLSB.zip 16.3 K
|
https://developer.jboss.org/thread/154225?tstart=0
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Hello Newgrounders! I'm starting work on a new series (Again XD)! I need two male voice actors to help me out!
I need a guy who can sound like VideogameDunkey and a guy who can sound like Critical. If you can, PM me with your Email address and I'll send you the script. If I think your voice is perfect for the role, then your hired!
Thanks to anyone who wants to help :D
Semi paid position, looking for voice tallent.
For full project details, please visit the URL below: eboot/
Hello voice actors of Newgrounds! I require myself one male and one female voice actor, for an animation I want to make. Now I've put all the information on this project in a neat .pdf (), but I understand this might be a bit suspicious, with virusses and what-not. If you'd feel more comfortable with a .docx I've also provided that.()
If you don't feel comfortable on downloads then you might wanna check the public version out here : osuqZjSmQmEwkVnvTCg/edit
I've added a screencap and I'm looking forward to your submissions!
Yo!,
Animator here looking for Two Voice actors for Semi periodic Animated Shorts!
One male One female, looking for Teen-ish to Young Adult Voice Ranged.
The Male would shoot for this Character: No specific voices in mind atm, so feel free to take a shot at it!.
Female voice would shoot for: , same as the male no specifics, so please take a shot =)!.
If interested or have any Questions feel free to PM me, or E-mail me at adrian@candyrag.com for more Info!
Thank you for your time~
-Adrian
Casting for Flash RPG "Primal Champions" - - Lots of Roles!
Download this Audition Packet for the Audition lines and images of the characters <audition packet.
Send the auditions to me at Redharvestng@yahoo.com
ok, i am currently making a two episode miniseries for class, it is based on a story i wrote a while back and i'm gonna need voice actors, the story is still pending and the characters names are pending changes so i will give a short description of the characters who will be in it or sure so as i can see if you can give an appropriate voice
story- the story takes place in a sort of fantasy world, its like a modern dark ages/medieval time
1-Red (between 20 and 25 years old), He is the impatient eldest of two brothers, arrogant and a very skilled swordsman and fighter. he can have any woman he wants (based on his skills and looks) but has little interest in anything other than honing and displaying his skills. he cares for his brother but doesn't express it well often leading to conflict. he has no job and makes his money as a sort of bounty hunter.
Voice- Preferably with an "im better than you" attitude that can still be changed to regretful or sympathetic without it seeming unusual
2-Blue (between 19 and 24) the genius younger of the two brothers, he quietly tends to keep to himself and rarely leaves the house. Blue is a scientist/inventor, he makes things upon request as it is his job. he wishes that he could be a fighter like his brother but any time he tries he ends up hurting himself as he is not suited for the field. he cares for yet is
often Jealous of his brother. blue's achievements are overshadowed by anything Red does. little by little his sanity depletes from his jealousy
voice- relatively shy and reserved, but can change to impatient and pushy.
voice 2- once he loses his sanity his voice must follow suit, something less deep pitch and i guess the word stringy, so not deep but not high. anyway, he will also need a laugh that shows he has lost his mind, nothing left but a soul of hate, i would prefer it if the laugh was drawn out but maybe a regular version as well
3-Advanced blue (????)
there will be a brief time gap and Blue will return (it will be blue but it also wont be blue, i cant really explain without spoiler) regardless, he will be different from his previous self greatly, his intelligence will remain the same if not increased, he is devious, deceitful and out for revenge (possibly more) his skills in combat now rival his brothers but he also has unusual abilities to add to his dangerousness.
sly, unpredictable/spontaneous and merciless, he will use anyone he needs to in order to achieve his goals. he almost always has a smile or smirk on his face and very rarely loses his calm, but when he does then it may mean trouble
once i get the script under way i can request a few lines for trials, but in the mean time if you are interested you can PM me, you can try saying some lines you make up or read somewhere in the mean time if you like
i may show you what the characters look like upon request when i get them sketched out
THE GROK SQUAD
-----
The Grok Squad is a group of sixteen little alien researchers stranded on Earth. While trying to find a way home, they decide to start learning more about our world to prepare an impromptu field report to their home culture. The show is essentially a fun educational show where each new topic is looked at through multiple lenses -- the scientific alien, the jock alien, the social alien, the artistic alien, and so on.
-----
VOICES:
One thing to remember with these characters is that they are all smart and friendly. Some are a little more introverted, others are much more outgoing. They are very curious which occasionally gets them into trouble, but they do their best not to cause trouble intentionally. Imagine a mix of an innocent inquisitive child with a trained scientist. Each has their own area of interest and unique way of looking at the world, but overall they are positive, upbeat, fun-loving, and have a thirst for knowledge.
Just as important, none of the accents should be TOO thick. These characters need to be able to explain complex topics in ways an average person can understand, and the accents, while adding flavor, should not subtract from the comprehension.
Character design artwork is available here:
If anyone has questions about like, personalities, that aren't clear from the images or descriptions, let me know!
EDIT: I have posted this same offer to my Facebook friends, but no one has specified any parts yet, so they are all still equally open and I will indicate otherwise if that changes!
MALES:
SCRUMP (Spatial/Dimensional): Brooklyn, Italian, Gangster, Gravely, New York, New Jersey (Danny Devito, Frank Stallone, Tony Clifton, Al Pacino, Chazz Palminteri)
VINDALOO (Vocal/Linguistic): Arkansas, Georgia, Politician, Deep Southern Drawl (Bill Clinton, Roscoe and Boss Hogg, Foghorn Leghorn, Futurama's Hyperchicken Lawyer)
HORNSWOGGLE (Mechanical/Dexterous): New England, Boston (Norm Abrahm's New Yankee Workshop, Peter Griffin, Alan Alda)
DIDGERIDOO (Kinesthetic/Athletic): Black, Urban (Will Smith, Michael Jordan)
MOOG (Naturalistic/Environmental): Australian, Ocker (Crocodile Hunter Steve irwin, Paul Hogan, Yahoo Derious)
YONKERS (Interpersonal/Social): Canadian, Minnesota (Bob and Doug McKenzie, Fargo, Drop Dead Gorgeous)
GIMBLE (Intrapersonal/Emotional): Midwest, Northwest, Quiet, Reserves, Friendly (Michael Cera, Tobey Maguire, Winnie the Pooh, Piglet)
IPSWITCH (Literary/Textual): British, Southern English, Formal RP (David Mitchell, Ricky Gervaise, Stephen Fry, John Cleese, Wadsworth the Robot)
FEMALES:
NUDNIK (Visual/Graphic): High rising terminal (Hippie, Valley Girl, Luna Lovegood, Cat Valentine from Victorious, Phoebe and Ursula from Friends/Mad About You, Cloudcuckoolander)
POLLIWOG (Aural/Rhythmic): Country Western (Reba McIntyre, Holly Hunter)
KATZENJAMMER (Existential/Spiritual): Southern Irish, Gaelic (Elvish)
WURTZEL (Analytical/Empirical): Queens, Yiddish, Jewish (Less Annoying Fran Drescher)
CADDYWAMPUS (Creative/Synthetical): Chicano, Latino (Carla from Scrubs)
TREACLE (Mathematical/Logical): Hindi, Indian, Pakistani (Raj -- Big Bang Theory, Samir -- Office Space)
BOROGOVE (Symbolic/Metaphorical): Jamaican (Cool Runnings)
FLINK (Factual/Memorial): Mid/Trans-Atlantic (Katherine Hepburn, Joan Crawford)
-----
AUDITION SCRIPT
NOTE: The following was voted the second most funniest joke in the world, according to Wikipedia. As it combines critical thinking with humor, it seemed a great sample script to play with animation using these characters, I'd like a full version of the script all in the same accent but with different inflections. Example: If you decide to do Wurtzel, you would do all the lines with the Queens/Yiddish accent, but she might do her regular voice for the narrator, a stern, deeper voice for Holmes, and a wistful dreamy voice for Watson, but they all would still sound like her. I'll cut and splice after I get 16 good recordings, either with one line from each character, or do a separate animation for each character telling the story, or combine two or three characters -- one as the narrator, one playing Holmes and one playing Watson. We'll see once I know what I have to work with!
NARRATOR:
Sherlock Holmes and Doctor Watson were going camping.
They pitched their tent under the stars and went to sleep.
Sometime in the middle of the night Holmes woke Watson up and said:
HOLMES:
Watson, look up at the sky, and tell me what you see.
NARRATOR:
Watson replied:
WATSON:
I see millions and millions of stars.
NARRATOR:
Holmes said:
HOLMES:
And what do you deduce from that?
NARRATOR:
Watson replied:
WATSON:
Well, if there are millions of stars,
and if even a few of those have planets,
it's quite likely there are some planets like Earth out there.
And if there are a few planets like Earth out there,
there might also be life."
NARRATOR:
And Holmes said:
HOLMES:
Watson, you idiot, it means that somebody stole our tent!
-----
DEADLINE:
I will set an initial deadline of 11:59PM, January 31st, 2013. If I need to extend the date more, I'll do so at that time (sixteen voices is a lot to ask for!)
-----
Please send attached files OR links to files on something such as Soundcloud (don't forget to enable downloading) to my email: jasonleeholm@gmail.com
Hello everyone,
I'm attempting to make my first flash and would like the assistance of two voice actors/actress.
I would like the help of one male who will be playing the 'father' so anyone who has a fatherly, adult sounding voice would be greatly appreciated.
Secondly for the role of the babysitter I would need the assistance of a female to play her part. She has been drawn to be in her younger teens and her I think the best way to describe her would be that she's upbeat and addresses adults with good manners.
Included in the link is a rough WIP of the animation with sub titles so those interested can have a look and see if they might be willing to play the part.
Please PM me if you wish to help me with my first flash.
Cheers.
I've written a movie script and put together a team for the storyboards. An artist, 4 professional producers, 2 signed rock musicians and a composer for the Film Score.
We're currently working with a movie studio and part of the contract allows me to source outside material for the DVD concept Package that would be presented to executives and Licensee's.
The budget I'm working with is extremely tight so the payment is in experience. If all goes to plan we can negotiate payment once finances become available.
Plot: 19 year old Shaun starts off as an ordinary outcast until he unveils a secret of God's. He an his friends must decide which side they will serve during an epic battle between good and evil.!
Voice recording submissions:
"Hi my name is____(name of character)__________"
If you can do more than one voice that would be great!
I'm really new to voice acting but if you are willing to take up a female noobie.....I'm game.!
Roles that are currently Available:
Mrs. Paschar
Supporting Character: Angie (Female)
Supporting Character: Samantha (Female)
Ms. Basquali - (Female)
Mrs. Pendagras (teacher)
Henny - (Female)
Nurse's 1, 2 & 3 (Females)
At 1/14/13 04:56 PM, XTREEMMAK wrote: Semi paid position, looking for voice tallent.
For full project details, please visit the URL below: eboot/
I remember seeing this on voiceactingalliance a few times I think I even auditioned once or twice. I am sorry you either aren't finding the right voices/ losing voice actors. Your project seems very well put together and I hope you are able to see it through; I know I would like to see it.
I don't mind giving it another try, I will try to get an audition in as soon as I can.
These are the characters we have left. Please send your auditions to Natesmickle84@hotmail.ca Thanks!
Ms. Basquali - 40'S (Female)
Mrs. Pendagras - mid 30'S (teacher)
Mrs. Paschar - 40's (Female)
Henny - mid 20'S (Female)
Hello! I am looking for two voices for my new, short film:
1. A deep, villainy voice. Can be cracky.
2. A light, woman voice. Actually, it doesn't have to be all that light, it can be a comically masculine voice. :)
If you're interested, PM me and I'll send you the lines and my E-mail adress.
Thanks in advance,
- Nikolaj
> Insert naive and probably tragically foreshadowing signature here <
Hey guys, so basically as the title says I need a VA who can do a deep black man voice.
Message me if your up for the job and I will go trough auditions and pick the best one.
Cheers
Yes
1. I need a VA that can sound like Sexy adult spanish male for an Adult Swim network bumper.
I also need a VA that can sound like a sexy female for the same project.
2. "delicious!" "fresh"
Try and sound as sensuous as possible.
3. email submissions to bentfingers1@gmail.com with the subject VO
4. TONIGHT!
hey.
Gender: Male
Age: 13 (Relatively deep voice, can change pitch)
Microphone: Samson - Meteor Mic
VA History: None. New to the scene
Languages (Other than English): N/A
Accents: British, Mexican (Can do others, just not very well)
Notes: I am also a "Brony", and have no problem with cursing.
I think that's all that needs to be known.
Peace.
"I've done nothing productive all day."
fuck fuck fuck posted in the wrong thread my bad shit.
"I've done nothing productive all day."
At 1/27/13 10:21 PM, imratherdashingokay wrote: Age: 13 (Relatively deep voice, can change pitch)
Microphone: Samson - Meteor Mic
Accents: British, Mexican (Can do others, just not very well)
Notes: I am also a "Brony", and have no problem with cursing.
Peace.
You are an enormous faggot and you're posting in the wrong thread.
Quit giving the youth a bad name.
@metraff @NG_Artists Support Newgrounds Classifieds: commission animation
"Metraff I'm going to fuck you." - Sodamachine
I DONT NEED VOICE ACTORS ANYMORE, GAAH!!!
> Insert naive and probably tragically foreshadowing signature here <
I'm making a animation called "Ageless: Fall of VigiI". I'm planning this episode to be about 10-15 minutes long. I am looking for a female voice actress to play a Ninja Named Ava. Its a main role. If you're interested, PM me. I have a presentation on my page as well.
(ClimbLadders)
Hi Guys, I'm looking for Some voice actors for a short Pokemon animation. These are the VA parts:
Ash Ketchum
Misty
Battle opponent
Enraged Man
Script and storyboards are complete and can be sent to potential VA to read from. The animation is a comedy and comprises of a few short sketches that run for 1minute 20 seconds total.
Thank you in advance to anyone who can help me with this as I've been trying to get an animation made for an extremely long time!
I got bored one day and wrote a script for an audio drama based in the star wars universe. I show it to a friend who said it was good I should rewrite it and try to produce it. I will be taking a web animation course later this year so i will try and animate it once i get the sounds mixed together.
The story is about a pair of slave girls in Jabba's Palace leading up to the events of Episode 6. There are six roles all together, seven if you count the narration.
Record your auditions in .MP3 format, just be sure they are clearly labeled so I know what character you are auditioning for. Send your lines to Spectre1988@hotmail.com and make sure the e-mail is titled 'Star Wars Project lines' so I will know it's not a spam bot or something. Try to have you auditions in by February 16th. I would suggest auditioning for all of them, or all the ones you think you can manage.
I have been over the net and a few roles have been filled but there are still several open.
Female roles
Name: Lyn Me.
Age: 21
Voice type: Soft and sultry
Character: A professional singer with a great body who likes to show it off. Left home to see the galaxy and maybe meet her childhood hero Boba Fett.
Line 1: So you are the new girl.
Fortuna told me about you.
Line 2: I think they would rather watch,
and so would I.
Name: Yarna.
Age: 40-ish
Voice type: Deeper then average and soft.
Character: A dancer who's husband was killed by pirates who then sold her and her children to Jabba.
Line 1: Don't talk like that! There is always hope. Just a little longer and it will all be over.
Line 2: It's time Oola.
Male roles
Name: Bib Fortuna.
Age: 30-ish
Voice type: Slightly higher then normal
Character; A sly and devious schemer, always looking to gain more power and back stab the competition.
Line 1: He is, most powerful person in this system and among the most influential in the Outer Rim.
Line 2: Oh great and mighty one, I bring you a rare gift from my home world.
Name: Jabba
Age: 800+
Voice Type: Very deep, will probably have to use the editor to change it to fit so don't worry too much.
Character: Powerful and evil crime lord, cruel and sadistic.
Line 1: Bo shudda!
Line 2: Choy'sa dtay wonna wanga?
Send your audition lines to spectre1988@hotmail.com.
I need a girl (Or boy if you can do this) to voice act for me.
I need someone who can sound like a little girl that's around 5-8 years of age.
Message me inbox and I'll give you the script.
I need a girl (Or boy if you can do this) to voice act for me.
I need someone who can sound like a little girl that's around 5-8 years of age.
Message me inbox and I'll give you the script.
We're looking for narrator voice for a large RPG game. The game is comedic in tone and we're open to a range of voices. Some examples of games that have narrators similar to what we're looking for:
Bastion
The Cave
Trine
PM me if interested with a link or some other way of listening to your entry. We are willing to negotiate payment.
Thanks
I need a voice actor with a good microphone. for this animation im working on. would really be helping me out.
At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out.
What is the animation? Can you give some more details? I have a good microphone and I'm interested. :)
At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out.
I have top-tier microphone(s).
Could you be more specific, though?
: "Sorry, but 'FUCK.als' already exists"
At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out.
Hello ToonLink-PC, I recently read your post which requested a voice actor with a good microphone, I have a Yeti-Blue pod-cast microphone with the ability to remove most of the background noise.
[like this]
There is one problem I do have: I read based on grammar, so you might want to consider putting your pauses in the right places, for example, if I read your post. (Reads post)
;)
|
http://www.newgrounds.com/bbs/topic/816629/91
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Asked by:
IMPORTANT: Change in Mobiles Services Tables - ID column is now 'string'
A number of scenarios require application control over the value of the table primary key (ID), for example the ability to make it globally unique. As of Friday 2013/11/22, we updated the service so that newly created tables use string as the type of their ID column instead of int.
Note: your existing applications and tables are not affected by this change. Only newly created tables will get the ID columns of type string.
To work with the tables created after this update, you need our latest SDK release that supports IDs of the type string in addition to int. You also need to use the type ‘string’ for your item’s Id type. For example,
public class MyItem
{
    public string Id { set; get; }
    public string Property1 { set; get; }
    public DateTime Property2 { set; get; }
}
If you are following instructions for a Mobile Services sample or a hands on lab that were written before this update and not updated to reflect this change, you may get a deserialization error. The sample likely tries to use the models with the property ID of type int with the newly created tables. Change the type of the Id property from int to string in the data models in the sample code to avoid the error. Please report such sample/hands on lab on this forum so that we fix it.
- Edited by CarlosFigueira (Microsoft employee, Moderator), Saturday, November 23, 2013 4:48 AM
- Edited by Kirill Gavrylyuk (Owner), Saturday, November 23, 2013 7:25 PM - changed title
General discussion
All replies
- On the Azure sample project below, TodoItemAdapter.cs has an override method that will cause a compile error, because the adapter is expecting a long return type. Is there any fix for that?
public override long GetItemId(int position)
{
return this [position].Id;
}
Hello, Carlos,
Thank you at least for writing about this. The change has been very hard to find documentation on.
The "Id column, now a string" broke one famous third party vendor's controls that I have been using for Azure mobile data synchronization.
Secondly, for those of us that would like to go back in and either modify the newly created table to have a Bigint type for Id, some documentation on creating a new table with a Bigint for Id using the command line tools (Azure CLI) would be nice.
Daniel Maxwell
Carlos,
These changes have brought about an issue in the online Database editor. When a new Azure Mobile Service is created and you create a plain new table (in this I have named it 'Item'), when you attempt to add a row to that table in the online Database editor, it denies you and gives the error "Cannot add a row in the Table Data Editor because one or more columns are required but their SQL types are not supported in Table Data Editor. Use the Transact-SQL Editor to add a row." This worked fine before these changes were rolled out. Hoping this can be resolved soon!
Cheers,
Paul
Current the CLI is the only client with the option to create a table with an integer Id.
You could also create a table in your SQL DB yourself, then create the table in the portal, in this case the portal will just see the table already exists and just expose it to you.
What are you looking to do that requires the older table model?
- I like the older table model as I can see clearly which item was added first, and it's easier to work with a number rather than a string. I also want to make my app work fast, so transferring a number across the web as the unique ID rather than a huge long string seems more sensible. I don't know why Microsoft did this; maybe they thought it would allow for bigger tables, but the data in my apps' tables won't get that big.
Rodger Campbell
Carlos - your model shows the new Id column as camel-cased. What I've experienced to date (prior to this new change) is that my ID column must be all lower case for the SDK to work and update tables properly.
Can you confirm whether this is still the case? Does the identity column on a table need to be all lower-cased (id) like it was previously? Or is it now camel cased (Id)? Thanks.
It appears that the lower-cased identity column requirement is still in place. I just created a table directly from within Azure and the identity column gets created as "id." So if you're creating your tables in SQL and then "adding" those tables to Mobile Services, it appears you still need to make sure the "id" is all lower-cased.
The classes in your code, however, can contain a camel-cased "Id" value, as Carlos shows in his example above.
I don't know about a guide, but from my own database design experience in sqlserver, I know there is a big performance penalty at least with Dell low end servers (5,000 USD aprox).
Also I've worked with SAP Business One which uses a lot of varchar(20) primary keys. It is extremely slow when querying multiple tables.
I would suggest using vartype bigint identity for the id which can hold 9,223,372,036,854,775,807 rows. If you need more rows consider dividing into multiple tables if you can. Also you will be saving space.
Being able to use an int for a primary key was one of my main reasons to choose Azure Mobile Services instead of Parse. Thank you for still letting us use a bigint for the id, please don't change this!
The new update also creates an insert/update trigger, and I cannot see what it does. (I'm guessing it creates the value for the id and updates the create and update dates.) That is more unnecessary processing if we are already using scripts. Windows Azure Mobile is super transparent vs other mobile services; let's keep it that way!
- Edited by Jose Ines Cantu Wednesday, February 12, 2014 3:51 AM
|
https://social.msdn.microsoft.com/forums/azure/en-US/f020a4e2-9301-4318-9023-3a1668959221/important-change-in-mobiles-services-tables-id-column-is-now-string?forum=azuremobile
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
QSqlRelationalTableModel
The QSqlRelationalTableModel class provides an editable data model for a single database table, with foreign key support. More...
#include <QSqlRelationalTableModel>
Inherits: QSqlTableModel.
QSqlRelationalTableModel::~QSqlRelationalTableModel () [virtual]
Destroys the object and frees any allocated resources.
void QSqlRelationalTableModel::clear () [virtual]
Reimplemented from QSqlQueryModel::clear().
QVariant QSqlRelationalTableModel::data ( const QModelIndex & index, int role = Qt::DisplayRole ) const [virtual]
Reimplemented from QAbstractItemModel::data().
bool QSqlRelationalTableModel::insertRowIntoTable ( const QSqlRecord & values ) [virtual protected]
Reimplemented from QSqlTableModel::insertRowIntoTable().
QString QSqlRelationalTableModel::orderByClause () const [virtual protected]
Reimplemented from QSqlTableModel::orderByClause().
QSqlTableModel * QSqlRelationalTableModel::relationModel ( int column ) const [virtual]
Returns a QSqlTableModel object for accessing the table for which column is a foreign key, or 0 if there is no relation for the given column.
The returned object is owned by the QSqlRelationalTableModel.
See also setRelation() and relation().
bool QSqlRelationalTableModel::removeColumns ( int column, int count, const QModelIndex & parent = QModelIndex() ) [virtual]
Reimplemented from QAbstractItemModel::removeColumns().
void QSqlRelationalTableModel::revertRow ( int row ) [virtual slot]
Reimplemented from QSqlTableModel::revertRow().
bool QSqlRelationalTableModel::select () [virtual]
Reimplemented from QSqlTableModel::select().
QString QSqlRelationalTableModel::selectStatement () const [virtual protected]
Reimplemented from QSqlTableModel::selectStatement().
bool QSqlRelationalTableModel::setData ( const QModelIndex & index, const QVariant & value, int role = Qt::EditRole ) [virtual]
void QSqlRelationalTableModel::setRelation ( int column, const QSqlRelation & relation ) [virtual]
void QSqlRelationalTableModel::setTable ( const QString & table ) [virtual]
Reimplemented from QSqlTableModel::setTable().
bool QSqlRelationalTableModel::updateRowInTable ( int row, const QSqlRecord & values ) [virtual protected]
Reimplemented from QSqlTableModel::updateRowInTable().
|
http://developer.blackberry.com/native/reference/cascades/qsqlrelationaltablemodel.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
NAME
setuid - set user identity
SYNOPSIS
#include <sys/types.h>
#include <unistd.h>

int setuid(uid_t uid);
DESCRIPTION
setuid() sets the effective user ID of the calling process. If the effective UID of the caller is root, the real UID and saved set-user-ID are also set. Thus, a set-user-ID-root program wishing to temporarily drop root privileges, assume the identity of a non-root user, and then regain root privileges afterwards cannot use setuid(). You can accomplish this with the (non-POSIX, BSD) call seteuid(2).
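A minimal sketch of the common use case, permanently dropping root privileges to those of the invoking user (the error handling shown is illustrative, not part of the manual page):

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    uid_t real_uid = getuid();      /* the invoking (non-root) user */

    if (setuid(real_uid) == -1) {   /* drop privileges permanently */
        perror("setuid");
        exit(EXIT_FAILURE);
    }
    /* from here on the process runs with the real user's identity */
    return 0;
}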
SEE ALSO
getuid(2), seteuid(2), setfsuid(2), setreuid(2), capabilities(7), credentials(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.ubuntu.com/manpages/hardy/en/man2/setuid32.2.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Hello.
Thanks for reviewing, Francois, Jon & Jeff!
Francois Romieu wrote:
[snip]
Btw I'd simply remove the 'work' variable and schedule in an interruptible
way until the dump is done.
OK, that will take me a bit longer to code. ;)
BUG() is a bit exagerated imho.
I can put an #ifdef RTL8169_DEBUG / #endif around it, if you'd be happier.
Thanks, bye, Rich =]
--
Richard Dawe [ ]
"You can't evaluate a man by logic alone."
-- McCoy, "I, Mudd", Star Trek
|
http://oss.sgi.com/projects/netdev/archive/2005-02/msg01975.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The QGL class is a namespace for miscellaneous identifiers in the Qt OpenGL module. More...
#include <qgl.h>
Inherited by QGLFormat, QGLContext, and QGLWidget.
List of all member functions.
Normally you can ignore this class. QGLWidget and the other OpenGL* module classes use it.
* OpenGL is a trademark of Silicon Graphics, Inc. in the United States and other countries.
See also Graphics Classes and Image Processing Classes.
This enum specifies the format options.
This file is part of the Qt toolkit. Copyright © 1995-2007 Trolltech. All Rights Reserved.
|
http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/qgl.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Struts, Hibernate and Spring - related questions and tutorial links:
- Struts Hibernate Spring: request for a combined Struts + Hibernate + Spring example, answered with a link to the integration tutorial.
- Struts configuration: what is written in the Action class, ActionForm and Model; the JSP page reads information from the ActionForm bean using JSP tags.
- Hibernate integration: writing the Hibernate configuration file; downloading Struts and Hibernate for the tutorial.
- Using Hibernate with Struts: what must be installed; Hibernate is an object-oriented mapping tool; links to further Struts-with-Hibernate tutorials and examples.
- hibernate, struts: why the DOCTYPE declaration is mandatory in the Hibernate configuration and mapping XML files when running under Eclipse.
- Hi - Struts: whether MySQL is required for Struts.
- Struts-Hibernate integration: an error in javax.servlet.http.HttpServlet.service(HttpServlet.java:802) while executing the integration example.
- Hibernate configuration file: mapping the persistent (POJO) class, defining database connection information, and a simple hibernate.cfg.xml example; annotations can also be used as mapping metadata.
- struts: request for a sample login application and the JAR files needed to run a Struts application.
- struts: validating input using the default Struts Validator pluggable validators and error messages (errors.required) defined in the validator file.
- Downloading Struts & Hibernate: unzip the download to C:\Struts-Hibernate-Integration and use the build.xml under code\WEB-INF\src.
- struts2 + hibernate: how to use Hibernate 3 with a Struts 2 application.
- Struts 1 tutorial and example programs: integrating Struts and Hibernate; the Hibernate configuration file, POJO class and Tutorial.hbm.xml mapping file.
- Struts articles: Struts configuration file mappings, the JAAS security framework policy file, and bridging the gap between Struts and Hibernate, currently among the most popular open source frameworks.
- Struts 2 + Hibernate: request for a step-by-step tutorial on integrating Struts 2 with Hibernate and building a web application.
- Struts first example: validating a price field and resolving validation.xml problems in Struts 2.
- hibernate: how to import the MySQL JDBC client JAR into Eclipse for the Hibernate configuration.
- Multiple file upload - Struts: uploading multiple files from JSP using enctype="multipart".
- struts+hibernate: org.hibernate.InvalidMappingException: Could not parse mapping document from resource roseindia/net/hibernate/Address.hbm.xml.
- Struts 2 zero configuration: using annotations to register actions instead of a configuration file.
- struts and hibernate integration: building an application that uses four tables.
- Hibernate configuration file: hibernate.cfg.xml provides the database connection information.
- hibernate: running a select (HQL) query; building a SessionFactory from a Configuration; a Hibernate configuration problem in a small example program.
- struts: request for a ready example using an Action class, DAO and services.
- struts - Framework: request for a plain Struts login form example without Spring or Hibernate integration.
- Java compilation error / Hibernate code problem in the Struts first example.
- Which extension to use to save a Hibernate file and how to run it: the mapping file maps the class object to the database table, and the configuration file holds the connection settings.
- Struts 2 file upload error while implementing a file upload action.
- Developing a Struts Hibernate plugin: the plugin stores the name of the Hibernate configuration file and creates the Hibernate Session.
- Hibernate: can hibernate.cfg.xml be renamed, and can multiple mapping resources be used in it?
- example on struts: request for any Struts example.
- Action configuration - Struts: configuring a Struts action in XML.
- Struts 2.2.1 tutorial: the file upload interceptor, advanced Struts actions, the login application and creating JSP files.
- Based on struts upload: how to delete a previously uploaded file.
- struts: a struts-config.xml form-beans configuration problem.
- Struts configuration file - struts.xml: the Struts 2 framework reads struts.xml from the class path of the web application; in Struts 1, struts-config.xml stores the mappings of actions.
- hi... - Struts: Tomcat 5.5 installed but the application does not run from the browser.
- Struts file uploading: uploading and later downloading the same file works for small files but fails for large files of around 10 MB.
|
http://www.roseindia.net/tutorialhelp/comment/2528
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
This page lists the most important API changes between Eigen2 and Eigen3, and gives tips to help porting your application from Eigen2 to Eigen3.
In order to ease the switch from Eigen2 to Eigen3, Eigen3 features Eigen2 support modes.
The quick way to enable this is to define the
EIGEN2_SUPPORT preprocessor token before including any Eigen header (typically it should be set in your project options).
A more powerful, staged migration path is also provided, which may be useful to migrate larger projects from Eigen2 to Eigen3. This is explained in the Eigen 2 support modes page.
The USING_PART_OF_NAMESPACE_EIGEN macro has been removed. In Eigen 3, just do:
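The snippet referred to above is not shown here; the Eigen 3 replacement for the removed macro is simply an ordinary using-directive, as in this minimal sketch:

#include <Eigen/Core>
using namespace Eigen;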
This is the single trickiest change between Eigen 2 and Eigen 3. It only affects code using
std::complex numbers as scalar type.
Eigen 2's dot product was linear in the first variable. Eigen 3's dot product is linear in the second variable. In other words, the Eigen 2 expression and its Eigen 3 equivalent differ as shown below.
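The two code snippets referred to here are not shown; a reconstruction of the correspondence, assuming complex-valued vectors (names are placeholders), is:

#include <Eigen/Dense>
#include <complex>
using namespace Eigen;

void dotExample()
{
    VectorXcf x = VectorXcf::Random(4), y = VectorXcf::Random(4);
    // Eigen 2 (dot linear in the first argument):   x.dot(y)
    // Eigen 3 equivalent (dot linear in the second argument):
    std::complex<float> d = y.dot(x);
    (void)d;
}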
In yet other words, dot products are complex-conjugated in Eigen 3 compared to Eigen 2. The switch to the new convention was commanded by common usage, especially with the notation
for dot products of column-vectors.
Notice that Eigen3 also provides these new convenience methods: topRows(), bottomRows(), leftCols(), rightCols(). See in class DenseBase.
In Eigen2, coefficient wise operations which have no proper mathematical definition (as a coefficient wise product) were achieved using the .cwise() prefix, e.g.:
In Eigen3 this .cwise() prefix has been superseded by a new kind of matrix type called Array for which all operations are performed coefficient wise. You can easily view a matrix as an array and vice versa using the MatrixBase::array() and ArrayBase::matrix() functions respectively. Here is an example:
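The examples referred to above are not shown; a reconstruction with placeholder vector names is:

#include <Eigen/Dense>
using namespace Eigen;

void cwiseExample(const VectorXf& vec1, const VectorXf& vec2, VectorXf& vec3)
{
    // Eigen 2: coefficient-wise product via the .cwise() prefix
    //     vec3 = vec1.cwise() * vec2;
    // Eigen 3: view both operands as arrays (or use the cwiseProduct() shortcut)
    vec3 = vec1.array() * vec2.array();
    vec3 = vec1.cwiseProduct(vec2);
}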
Note that the .array() function is not at all a synonym of the deprecated .cwise() prefix. While the .cwise() prefix changed the behavior of the following operator, the array() function performs a permanent conversion to the array world. Therefore, for binary operations such as the coefficient wise product, both sides must be converted to an array as in the above example. On the other hand, when you concatenate multiple coefficient wise operations you only have to do the conversion once, e.g.:
With Eigen2 you would have written:
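Both snippets are missing from this text; a reconstruction of the Eigen 3 form, with the older Eigen 2 spelling shown in the comment (vector names are placeholders), is:

#include <Eigen/Dense>
using namespace Eigen;

void chainedExample(VectorXf& vec1, const VectorXf& vec2, const VectorXf& vec3)
{
    // Eigen 3: one conversion to the array world covers the whole chain
    vec1 = vec2.array().abs().pow(3) * vec3.array().abs().sin();

    // Eigen 2 required the .cwise() prefix before every coefficient-wise step:
    //     vec1 = (vec2.cwise().abs().cwise().pow(3)).cwise() * vec3.cwise().abs().cwise().sin();
}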
In Eigen 2 you had to play with the part, extract, and marked functions to deal with triangular and selfadjoint matrices. In Eigen 3, all these functions have been removed in favor of the concept of views:
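The code that illustrated the views is not shown here; a minimal sketch of the Eigen 3 view API (the Eigen 2 spelling is indicated in the comment) is:

#include <Eigen/Dense>
using namespace Eigen;

void viewExample(const MatrixXf& m, const VectorXf& b, VectorXf& x)
{
    // Eigen 2 used m.part<UpperTriangular>(), extract() and marked();
    // Eigen 3 replaces these with views:
    x = m.triangularView<Upper>().solve(b);      // solve using the upper triangle of m
    VectorXf y = m.selfadjointView<Lower>() * b; // treat m as self-adjoint, lower part stored
    (void)y;
}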
Some of Eigen 2's matrix decompositions have been renamed in Eigen 3, while some others have been removed and are replaced by other decompositions in Eigen 3.
The Geometry module is the one that changed the most. If you rely heavily on it, it's probably a good idea to use the Eigen 2 support modes to perform your migration.
In Eigen 2, the Transform class didn't really know whether it was a projective or affine transformation. In Eigen 3, it takes a new Mode template parameter, which indicates whether it is a Projective or Affine transform. There is no default value.
The Transform3f (etc) typedefs are no more. In Eigen 3, the Transform typedefs explicitly refer to the Projective and Affine modes:
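The typedef mapping itself is not shown in this text; a sketch of the correspondence is:

#include <Eigen/Geometry>
using namespace Eigen;

// Eigen 2:           Eigen 3:
//   Transform2f  ->    Affine2f or Projective2f
//   Transform3f  ->    Affine3f or Projective3f
Affine3f A = Affine3f::Identity();
Projective3f P = Projective3f::Identity();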
In Eigen all operations are performed in a lazy fashion except the matrix products which are always evaluated into a temporary by default. In Eigen2, lazy evaluation could be enforced by tagging a product using the .lazy() function. However, in complex expressions it was not easy to determine where to put the lazy() function. In Eigen3, the lazy() feature has been superseded by the MatrixBase::noalias() function which can be used on the left hand side of an assignment when no aliasing can occur. Here is an example:
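The example is missing here; a reconstruction with placeholder matrix names is:

#include <Eigen/Dense>
using namespace Eigen;

void productExample(MatrixXf& m4, const MatrixXf& m2, const MatrixXf& m3)
{
    // Eigen 2: m4 += (m3 * m2).lazy();   // suppress the temporary explicitly
    // Eigen 3: mark the left-hand side as alias-free instead
    m4.noalias() += m3 * m2;
}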
However, the noalias mechanism does not cover all the features of the old .lazy(). Indeed, in some extremely rare cases, it might be useful to explicitly request a lazy product, i.e., a product which will be evaluated one coefficient at a time, on request, just like any other expression. To this end you can use the MatrixBase::lazyProduct() function; however, we strongly discourage you from using it unless you are sure of what you are doing, i.e., you have rigorously measured a speed improvement.
The EIGEN_ALIGN_128 macro has been renamed to EIGEN_ALIGN16. Don't be surprised, it's just that we switched to counting in bytes ;-)
The EIGEN_DONT_ALIGN option still exists in Eigen 3, but it has a new cousin: EIGEN_DONT_ALIGN_STATICALLY. It allows you to get rid of all static alignment issues while keeping alignment of dynamic-size heap-allocated arrays, thus keeping vectorization for dynamic-size objects.
A common issue with Eigen 2 was that when mapping an array with Map, there was no way to tell Eigen that your array was aligned. There was a ForceAligned option but it didn't mean that; it was just confusing and has been removed.
New in Eigen3 is the Aligned option. See the documentation of class Map. Use it like this:
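The usage example is not shown here; a minimal sketch (the data pointer is assumed to be suitably aligned) is:

#include <Eigen/Dense>
using namespace Eigen;

void mapExample(float* data, int size)   // data assumed 16-byte aligned
{
    Map<VectorXf, Aligned> v(data, size);
    v.setZero();
}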
There also are related convenience static methods, which actually are the preferred way as they take care of such things as constness:
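A sketch of the static convenience form, which also preserves the constness of the pointer, might look like this:

#include <Eigen/Dense>

void mapAlignedExample(const float* data, int size)   // data assumed 16-byte aligned
{
    float s = Eigen::VectorXf::MapAligned(data, size).sum();
    (void)s;
}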
In Eigen2,
#include<Eigen/StdVector> tweaked std::vector to automatically align elements. The problem was that that was quite invasive. In Eigen3, we only override standard behavior if you use Eigen::aligned_allocator<T> as your allocator type. So for example, if you use std::vector<Matrix4f>, you need to do the following change (note that aligned_allocator is under namespace Eigen):
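The before/after snippet is missing from this text; a reconstruction of the change is:

#include <vector>
#include <Eigen/StdVector>

// Eigen 2:  std::vector<Eigen::Matrix4f> my_vector;
// Eigen 3:  pass Eigen's allocator explicitly
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > my_vector;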
In Eigen2, global internal functions and structures were prefixed by
ei_. In Eigen3, they all have been moved into the more explicit
internal namespace. So, e.g.,
ei_sqrt(x) now becomes
internal::sqrt(x). Of course it is not recommended to rely on Eigen's internal features.
|
http://eigen.tuxfamily.org/dox/Eigen2ToEigen3.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
In a previous tip, we discussed how to leverage Property Controls to create a Tip 10 list based off various categories. This tip assumed the data was setup in such a way that each category was its own column.
(For example, every row had a customer id, as well as a how much that customer shopped in each of the 6 departments (clothing, furniture, toys, groceries, electronics, and garden) and we wanted to find the top 10 customers for any given department selected by the user in a drop down)
What if the data is setup differently so that each category is in a single column? We can accomplish the same output, but will need to follow a few different steps.
Assume our data is structured like the Data Table below. . .
. . . and we want to create a Bar Chart that allows us to display the top 3 groups from the ‘Group’ Column based off a specific value for ‘Cat A’. For example, which 3 groups with the value for ‘Cat A’ of True had the most ‘Value’.
To accomplish this, we first need to create a Drop-down list Property Control to display unique values from the ‘Cat A’ column.
Using what we learned from another previous tip, we then create a calculated column which will either output ‘Yes’ or ‘No’, depending on whether the value from ‘Cat A’ matches the value selected by the user in the drop down we just created.
if(find([Cat A],"${CatAFilter}")>0,"Yes", "No" )
We should then uncheck the ‘No’ value in the Filter Panel for this newly created Calculated Column. Now, the resulting data only shows values that match the drop down. If the user selects ‘False’ in the drop down, then only rows in ‘Cat A ‘ where the value is ‘False’ should be displayed as shown below.
As we mentioned in the previous tip , it is a good idea to hide the Filter for the newly created calculated column, so that users in the Web Player do not accidentally update or reset it.
Once we have this, we now need to rank the remaining rows. To accomplish this, we will create another calculated column. This will use the Rank function, to rank in descending order the ‘Value’ Column grouped by the ‘dynamicFilter’ calculated column we just created.
Rank([Value],"desc",[dynamicFilter])
Next, just like in the previous tip, we create a Bar Chart to show the ‘Group’ values on the category axis, and on the value axis, we use the ‘Value’ column. We also configure the Bar Chart to sort the Bars by height. We then use the ‘Limit Data Using Expression’ property to display only the top 3 Groups.
if ([Top 3]<4, True, False)
To extend this, if you wanted to display the Top 3 based off values from both ‘Cat A’ and ‘Cat B’ columns, you would create another Dropdown list Property Control to display unique values in the ‘Cat B’ column and you would update the ‘dynamicFilter’ Calculated Column to the following expression:
if(find([Cat A],"${CatAFilter}")>0 AND find([Cat B],"False")>0,"Yes", "No" )
Interested in testing your skills to see how much you know about authoring in Spotfire? Try our newly released Spotfire Author Assessment. It is a 60-question exam covering all topics related to authoring and report development in TIBCO Spotfire Professional. The exam is hands-on and requires students to not only understand available features and functions, but also how to navigate the Spotfire Professional User Interface , and how to take data and business questions and come up with solutions using TIBCO Spotfire. The exam requires students to have TIBCO Spotfire Professional 4.x or higher installed.
Throughout this tip of the week series, we have displayed many solutions which utilized the combination of Property Controls and Script Controls. These two features, when used together, can create endless solutions in Spotfire. This week we will look at one related to transforming data. Many times when a user loads their data in an exploratory fashion, they realize it is not in the correct format required for analysis. Spotfire does have a series of ‘Data Transformations’ that can be applied when you load data, like Pivot, Unpivot, etc., but there are a few scenarios where these will not work. One specific scenario is when data comes in from a log or similar and a single column includes a comma-separated list of values, like 342,234,324,546, which need to be broken down into separate columns for each value (4 columns in this case).
We can use the powerful Regular Expression functions included in our Calculation Column expressions to parse the column, but that will only work when you have a known quantity of delimiters in each column and even then you would have to write the expression for each new column desired manually.
If you wish to automatically detect the number of delimiters and then loop through to create multiple new columns at once, you can utilize the combination of a Property Control and a Script Control.
First, we create a Drop-down list Property Control which allows you to select which column to transform.
Then we create a Script Control to break up the selected Column into multiple Columns based off the comma delimiter. The Script will first create a Column that counts the number of delimiters for each row. It will then loop through all rows in the dataset and create a new Column for each value before each delimiter. So, for the value 342,234,324,546, the result will be one new Column holding the number of elements (4 in this case), and then 4 new Columns for the specific values. For this specific row, the values would be 342, 234, 324, and 546.
The Script to accomplish this is shown below. It assumes the Property which is attached to the Drop down list Property Control is called 'myColumnSelection'
curDT = Document.ActiveDataTableReference
cols = curDT.Columns
targetCol = Document.Properties["myColumnSelection"]

#Create a new column that counts the comma delimiter
myExpression = '1+len(RXReplace(string([${myColumnSelection}]),"([A-Za-z0-9]+)","","g"))'
myNewColName = cols.CreateUniqueName("NumElements")
cols.AddCalculatedColumn(myNewColName, myExpression)

#Get max number of elements
maxElements = curDT.Columns.Item[myNewColName].RowValues.GetMaxValue().ValidValue

if maxElements > 1:
    #Generate Columns upto but not the last item
    index = 1
    while index < maxElements:
        myExpression = 'RXReplace([C],"((\\\d+)[,]){' + str(index) + '}.*","$2","")'
        newCol = targetCol + str(index)
        myNewColName = cols.CreateUniqueName(newCol)
        cols.AddCalculatedColumn(myNewColName, myExpression)
        index = index + 1
    #Generate Column for last item
    myExpression = 'RXReplace([C], "((\\\d+),){' + str(index-1) + '}(.*)", "$3", "")'
    newCol = targetCol + str(index)
    myNewColName = cols.CreateUniqueName(newCol)
    cols.AddCalculatedColumn(myNewColName, myExpression)
While this solution may work well for ad hoc analytics, the down side is that the data is loaded first in Spotfire, and then transformed. A more production ready version of this would leverage the Spotfire SDK to build a Custom Data Transformation. With this approach, the transformation happens before the data is loaded and displayed in Spotfire Professional. In addition, it would automatically be re-applied as data is reloaded or replaced.
Many times, users would like to collaborate with each other in the same Analysis file but don't have a specific collaboration tool, like TIBBR () to use. By leveraging the power of Property Controls and Script Controls, you can create a page on your Analysis file, which allows users to view and share comments ( which works in both the Web Player and Professional client).
First we need to setup an input box to capture the user’s comments. We do this with an ‘Input Field (multiple lines)’ Property Control:
We can also add a descriptive heading above the Input field as shown below:
After that, we need to create a place to store the comments. For this, we create a Document Property and attach it to a ‘Label’ Property Control. To maximize real estate, rather than putting the Label Property Control underneath the Input Field Property Control, we leveraged the concepts discussed in an earlier tip to place them in an HTML table side by side.
In addition, we were able to style the Label by putting a border around it so it's easily visible to the user. The final step is to create a Script which will take the comments entered by the user in the Input Field Property Control and save them to the Document Property we just created (which will update the display in the Label Property Control).
Below is the Script. It assumes the Document Property for the saved comments (attached to the Label) is called savedComments, and it assumes the Document Property for the comments the user has just entered (the input field Property Control) is called inputComment.
from System import DateTime, Threading

strComments = Document.Properties["savedComments"]
strUserInput = Document.Properties["inputComment"]

# Get the current time ie 09/15/2012 08:17:31
timestamp = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss")

# Get the currently logged in user’s username
username = Threading.Thread.CurrentPrincipal.Identity.Name

# Create the comment lines by appending the timestamp and login username followed by the comment on a separate line
commentToAppend = timestamp + " - " + username + "\n" + strUserInput

if strComments == "":
    newLine = ""
else:
    newLine = "\n\n"

strAppendedComments= strComments + newLine + commentToAppend

#Append the comment lines to the savedComments Document Property
Document.Properties["savedComments"] = strAppendedComments

#Clear the Input Field Property Control of the previous input
Document.Properties["inputComment"] = ""

When executed, this script will capture the user's login name and the timestamp, and append them together with the comment to the savedComments Property. If you wish to prepend comments (so the newest show up on top), change the following line in the script:
strAppendedComments= strComments + newLine + commentToAppend
to
strAppendedComments= commentToAppend + newLine + strComments

Since Document Properties are automatically persisted within the Spotfire Analysis file, the comments are saved and stored.
Interested in learning how to build your own solutions like this? Please consider taking our SP232 Automation APis with Iron Python course using our Mentored Online Training delivery model.
We’re back!
This tip will be the first one of our fall series. Thank you to everyone who reached out via email to check on the status of the Tip of the Week Blog. It was nice to see how many followers we have who wanted the tips to return.
This week we look at how to use Property Controls and Calculated Columns to create a plot which dynamically updates to show the top ten values of a category, where the category can be selected via a Drop-down Property Control.
Assume we are working with data which shows sales across various departments and the departments are what we want to use as values in the Drop-down. We should create a Drop-down list Property Control which sets values through 'Column selection'.
Notice in the image, in the ‘selectable columns’ field, we chose to manually set which columns to include by writing the following expression:
Name:Electronics OR Name:garden OR Name:Groceries OR Name:Toys OR Name:Furniture OR Name:Clothing
This will ensure that only the departments are shown as options in the Drop-down list. Once we have this created, we now need to create a Calculated Column. This column should rank the department selected from the Drop-down list.
Assuming the Property attached to the Property Control we just created is called whichDepartments, the expression would look like the following:
Rank([${whichDepartments}],"desc") As [Dynamic Rank]
Name the new Calculated Column ‘Dynamic Rank’. Then create a Bar Chart. The Category Axis, for our data set, should be set to 'Customer ID', since we want to show the top ten customers. The Value Axis should be dynamically updated to be the Sum of whatever department is selected in the Drop-down.
To do this, right click on the Value Axis and select ‘Custom Expression’. In the resulting dialog, enter the following expression:
Sum(${whichDepartments})
We then need to make sure the Bar Chart only shows the top ten values. To do this, starting in TIBCO Spotfire 4.0, there is the ability to limit data directly inside a visualization. From the properties dialog, select the ‘Data’ menu and on the bottom, click the ‘Edit’ button next to the ‘Limit data using expression:’ item.
The expression you should add is :
if([Dynamic Rank]<11,True,False)
What this expression will do is only show data where the ‘Dynamic Rank’ column is less than 11 (so the top 10). The ‘Dynamic Rank’ column will update dynamically to re-rank based off what is selected in the Property Control Drop-down. The end result is an analysis file which allows the consumer to select a department and then have the Bar Chart dynamically update to show the top 10 customers. This is much more efficient than creating a Bar Chart for each department.
To learn more about how to use any of the functionality explained in this tip, please consider taking any of our Mentored Online Training courses.
|
http://spotfirecommunity.tibco.com/community/blogs/tips/archive/2012/09.aspx
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
#include <Epetra_SerialDenseSVD.h>
Inheritance diagram for Epetra_SerialDenseSVD:
The Epetra_SerialDenseSVD class provides a singular value decomposition (SVD) solver for dense linear problems.
Constructing Epetra_SerialDenseSVD Objects
There is a single Epetra_SerialDenseSVD constructor.
|
http://trilinos.sandia.gov/packages/docs/r8.0/packages/epetra/doc/html/classEpetra__SerialDenseSVD.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
UnTunes:We will mock you!
From Uncyclopedia, the content-free encyclopedia
One of the best rock ballads ever to be performed by the Uncyclopedia Band that was composed and lyrics written by Oscar Wilde are sung to those who would dare invade Uncyclopedia and blank and vandalize pages, or make demands that admins unprotect pages or delete pages that offend them. It is, in fact, sung 38 times daily to people who vandalize this very page you are reading right now. If you haven't heard this song already, you must have never had Internet access.
edit History
Long ago, Oscar Wilde knew that one day, morons dumb enough to dare take on his creation and followers would need a song be sung to them. It is said that music often calms the savage beast, and these invaders to Uncyclopedia are beasties indeed! That one day, an Unsongs namespace might be created to add in this song in mp3 and ogg formats along with the lyrics and vocals by Uncyclopedia admins and members to be played for these invading sockpuppet hordes.
edit The Legend
This rock ballad is so powerful that none dare stand it, save for Benson who is better than all of us anyway. No World Order, no Anonymous user, no dynamic IP dare resist this song of legend. It is played all over the Internet, and it is said that there are those who fear it. There are those who cannot stand it, for it is a righteous song, so righteous that it is truthful and it turns away the undead and the idiotic (who lack brains anyway like the undead) back to whence they came.
edit The Lyrics
Slashy you're a boy make a big mess
Blankin' in the article gonna be an idiot some day
You got spud on yo face
You big disgrace
Leavin' your slashes all over the place
We will we will mock you
We will we will mock you
Powershot you're a young man 'tard man
Writin' in the forum calling everyone gay
You got blood on yo face
You big disgrace
Whorin' your articles all over the place
We will we will mock you
We will we will mock you
Conspiracy you're an odd man bore man
Threatenin' with your lies gonna make you some cliché some day
You got spud on your face
You big disgrace
Admins better block you back with some mace
We will we will mock you
We will we will mock you
|
http://uncyclopedia.wikia.com/wiki/UnTunes:We_will_mock_you!?oldid=5295438
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
14 September 2012 20:13 [Source: ICIS news]
Correction: In the ICIS story headlined “SABIC’S Al Jubail plant to start up in Q1 2013” dated Friday 14 September 2012, please read the headline as "SABIC’s Al Jubail fatty alcohols to come online late 2013". In the story text, there were numerous errors throughout. An updated and corrected story follows.
BUDAPEST (ICIS)--SABIC’s new fatty alcohol plant at Al Jubail, Saudi Arabia, will come online towards the end of 2013 to supply feedstock for its ethoxylation plant, the company’s regional business manager said on Friday.
Speaking at the 1st ICIS European Surfactants Conference in Budapest, Brandao attributed the increased demand to higher living standards, together with a surge in population growth in the MENA area.
“This increased purchasing power will leverage the region’s consumption to levels seen in mature markets,” he added.
SABIC’s ethoxylation plant currently has a nameplate capacity of 40,000 tonnes/year. It started operating at the beginning of April 2012 with a basic portfolio, which has since been increasing stepwise towards specialties.
The plant’s capacity will be expanded by 2013 and new projects in different locations are under
|
http://www.icis.com/Articles/2012/09/14/9595814/corrected-sabics-al-jubail-fatty-alcohols-to-come-online-late-2013.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
#include <db.h> DB_MULTIPLE_INIT(void *pointer, DBT *data);
If either of the DB_MULTIPLE or DB_MULTIPLE_KEY flags were specified to the DB->get() or DBcursor->get() methods, the data DBT returned by those interfaces will refer to a buffer that is filled with data. Access to that data is through the DB_MULTIPLE_* macros.
This macro initializes a variable used for bulk retrieval.
The data parameter is a DBT structure returned from a successful call to DB->get() or DBcursor->get() for which one of the DB_MULTIPLE or DB_MULTIPLE_KEY flags were specified.
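A minimal sketch of the bulk-retrieval loop may help; the buffer size and the DB_MULTIPLE_NEXT follow-up macro shown here are assumptions based on the standard bulk-retrieval pattern, not part of this page:

#include <db.h>
#include <string.h>

/* Iterate over a bulk buffer returned by DB->get() with the DB_MULTIPLE flag. */
void dump_bulk(DB *dbp, DBT *key)
{
    DBT data;
    void *p, *retdata;
    size_t retdlen;
    char buf[64 * 1024];            /* caller-supplied bulk buffer (multiple of 1024) */

    memset(&data, 0, sizeof(data));
    data.data = buf;
    data.ulen = sizeof(buf);
    data.flags = DB_DBT_USERMEM;

    if (dbp->get(dbp, NULL, key, &data, DB_MULTIPLE) == 0) {
        DB_MULTIPLE_INIT(p, &data);             /* initialize the bulk cursor */
        for (;;) {
            DB_MULTIPLE_NEXT(p, &data, retdata, retdlen);
            if (p == NULL)                      /* buffer exhausted */
                break;
            /* retdata / retdlen now reference one data item */
        }
    }
}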
|
http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/C/DB_MULTIPLE_INIT.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Let's return now to our example, in which we are creating a new stream template by derivation.
Let us derive a new stream type odatstream that has an additional data member fmt_ for storing a date format string, together with a corresponding member function fmt() for setting the date format specification.
template <class charT, class Traits = std::char_traits<charT> >
class odatstream : public std::basic_ostream<charT,Traits>
{
public:
    odatstream(std::basic_ostream<charT,Traits>& ostr,
               const char* fmt = "%x")                                //1
    : std::basic_ostream<charT,Traits>(ostr.rdbuf())
    {
        fmt_ = new charT[std::strlen(fmt)];
        std::use_facet<std::ctype<charT> >(ostr.getloc()).
            widen(fmt, fmt+std::strlen(fmt), fmt_);                   //2
    }

    std::basic_ostream<charT,Traits>& fmt(const char* f)             //3
    {
        delete[] fmt_;
        fmt_ = new charT[std::strlen(f)];
        std::use_facet<std::ctype<charT> >(this->getloc()).
            widen(f, f+std::strlen(f), fmt_);
        return *this;
    }

    charT const* fmt() const                                         //4
    {
        charT* p = new charT[Traits::length(fmt_)];
        Traits::copy(p, fmt_, Traits::length(fmt_));
        return p;
    }

    ~odatstream()                                                    //5
    {
        delete[] fmt_;
    }

private:
    charT* fmt_;                                                     //6

    template <class charT2, class Traits2>                           //7
    friend std::basic_ostream<charT2, Traits2>&
    operator<< (std::basic_ostream<charT2, Traits2>& os, const date& dat);
};
We would like to be able to insert date objects into all kinds of output streams. Whenever the output stream is a date output stream of type odatstream, we would also like to take advantage of its ability to carry additional information for formatting date output. How can this be achieved?
It would be ideal if the inserter for date objects were a virtual member function of all output stream classes that we could implement differently for different types of output streams. For example, when a date object is inserted into an odatstream, the formatting would use the available date formatting string; when inserted into an arbitrary output stream, default formatting would be performed. Unfortunately, we cannot modify the existing output stream classes, since they are part of a library you will not want to modify.
This kind of problem is typically solved using dynamic casts. Since the stream classes have a virtual destructor, inherited from class std::basic_ios, we can use dynamic casts to achieve the desired virtual behavior.
NOTE -- For a more detailed discussion of the problem and its solution, see Section 14.2, p. 306ff, of Bjarne Stroustrup, The Design and Evolution of C++, Addison-Wesley 1994.
Here is the implementation of the date inserter:
template <class charT, class Traits> std::basic_ostream<charT, Traits> & operator << (std::basic_ostream<charT, Traits >& os, const date& dat) { std::ios_base::iostate err = std::ios_base::goodbit; try { typename std::basic_ostream<charT, Traits>::sentry opfx(os); if (opfx) { charT buf[3]; const charT *fmt = buf; odatstream<charT, Traits> *p = dynamic_cast<odatstream<charT, Traits>*>(&os); //1 if (p) fmt = p->fmt_; //2 else { //3 char patt[3] = "%x"; std::use_facet<ctype<charT> >(os.getloc ()) .widen (patt, patt + 3, buf); } typedef std::ostreambuf_iterator<charT, Traits> Iterator; typedef std::time_put<charT, Iterator> TimePut; if (std::use_facet<TimePut>(os.getloc()) .put (os, os, os.fill (), &dat.tm_date_, fmt, fmt + Traits::length (fmt)).failed()) err = ios_base::badbit; os.width (0); } } catch (...) { bool flag = false; try { os.setstate (std::ios_base::failbit); } catch (std::ios_base::failure) { flag = true; } if (flag) throw; } if (err) os.setstate(err); return os; }
The date output stream has a member function for setting the format specification. Analogous to the standard stream format functions, we would like to provide a manipulator for setting the format specification. This manipulator affects only output streams. Therefore, we must define a manipulator base class for output stream manipulators, osmanip, along with the necessary inserter for this manipulator. We do this in the code below. See Section 33.3.3 for a detailed discussion of the technique we are using here:
template <class Ostream, class Arg>
class osmanip {
public:
    osmanip(Ostream& (*pf)(Ostream&, Arg), Arg arg)
    : pf_(pf), arg_(arg)
    { ; }

protected:
    Ostream& (*pf_)(Ostream&, Arg);
    Arg arg_;

    friend Ostream& operator<< (Ostream& ostr, const osmanip& manip)
    { return (*manip.pf_)(ostr, manip.arg_); }
};
After these preliminaries, we can now implement the setfmt manipulator itself:
template <class charT, class Traits>
inline std::basic_ostream<charT,Traits>&
sfmt(std::basic_ostream<charT,Traits>& os, const char* f)                     //1
{
    odatstream<charT,Traits>* p =
        dynamic_cast<odatstream<charT,Traits>*>(&os);                         //2
    if (p)
        p->fmt(f);                                                            //3
    return os;                                                                //4
}

template <class charT, class Traits>
inline osmanip<std::basic_ostream<charT,Traits>, const char*>
setfmt(const char* fmt)                                                       //5
{
    return osmanip<std::basic_ostream<charT,Traits>, const char*>(sfmt, fmt);
}
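A hypothetical usage sketch tying the pieces together may help; how a date value is obtained is assumed here, since the date class itself is not defined in this excerpt:

#include <iostream>

int main()
{
    date d;                                 // some date value (construction assumed)
    odatstream<char> ods(std::cout, "%d %B %Y");
    ods << d << '\n';                       // formatted with the stream's own format
    ods.fmt("%x");                          // switch to the locale's default date format
    ods << d << '\n';
    return 0;
}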
The solution suggested in Section 38.4.3 uses dynamic casts and exception handling to implement the date inserter and the date format manipulator. Although this technique is elegant and makes proper use of the C++ language, it might introduce some degradation in runtime performance due to the use of RTTI (Run-Time Type Identification).
If optimal performance is important, you can choose an alternative approach: in the proposed solution that uses dynamic casts, extend the date inserter for arbitrary output streams basic_ostream<charT,Traits>& operator<< (basic_ostream <charT,Traits>&, const date&) so that it formats dates differently, depending on the type of output stream. Alternatively, you can leave the existing date inserter for output streams unchanged and implement an additional date inserter that works for output date streams only; its signature would be odatstream<charT,Traits>& operator<< (odatstream<charT,Traits>&, const date&). Also, you would have two manipulator functions, one for arbitrary output streams and one for output date streams only, that is, basic_ostream<charT,Traits>& sfmt(basic_ostream<charT,Traits>&, const char*) and odatstream<charT,Traits>& sfmt
(odatstream<charT,Traits>&, const char*). In each of the functions for date streams, you would replace those operations that are specific for output date streams.
This technique has the drawback of duplicating most of the inserter's code, which in turn might introduce maintenance problems. The advantage is that the runtime performance is unlikely to be adversely affected.
|
http://stdcxx.apache.org/doc/stdlibug/38-4.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
J2ME - related questions and tutorial links:
- When will repaint() of a CustomItem be called automatically?
- J2ME: how to create a table using four fields in RMS.
- J2ME: sending orders from Linux to a servlet that returns the answers to a MIDlet.
- J2ME tutorial section: Java Platform Micro Edition, the MIDlet lifecycle, and JAD and properties files.
- J2ME code: problems connecting a MIDlet to a servlet; how to send the user name and password entered in the MIDlet to the servlet and how to read the response.
- j2me application: code for a mobile tracking system using a J2ME application with servlets.
- J2ME app with servlets: can messages be sent and received between a servlet website and a mobile phone without using a router?
- j2me solution: storing double latitude/longitude values in a MySQL database via a servlet; a value such as 87.555657 is displayed as 1.506276030234...E-308 on the servlet side.
- j2me pgrm run: how to run a J2ME program (link to the J2ME tutorials).
- How to access a MySQL database from J2ME using NetBeans; the database has to be accessed through a servlet.
- How to connect a J2ME program with MySQL using a servlet: the MIDlet imports java.io.*, java.util.* and javax.microedition.midlet.*, and reads the response from the servlet page with a DataInputStream.
- j2me code: how to write a J2ME calendar using an alert form for a special day.
- j2me mysql connectivity: a reservation project needs connectivity to a MySQL database.
- Emulator for J2ME in Eclipse: how to obtain and run an emulator for a J2ME application in Eclipse.
http://roseindia.net/tutorialhelp/comment/85653
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
HDS, Cleversafe and Nirvanix boost their cloud storage
Established storage vendor Hitachi Data Systems (HDS) and new vendors Cleversafe Inc. and Nirvanix recently agreed to reinforce their cloud storage
HDS deployed a Private File Tiering solution, which the vendor says is the initial offering of a planned family of cloud solutions. Cleversafe deployed Version 2.1 of its object-based software that offers a basis for service vendors and government organizations to provide cloud storage services. With this latest offering, the startup is making its software available separately rather than solely on an appliance. Nirvanix launched its hNode hybrid cloud storage solution that gathers files in the customer's data center and replicates them to Nirvanix's offsite Storage Delivery Network (SDN) cloud.
According to analysts, every kind of storage provider is working on packaging their offerings for cloud implementations.
"Vendors are setting the foundation for selling products through cloud service providers, and becoming cloud service providers themselves," Rick Villars, vice president of storage systems and executive strategies at IDC, said.
See this guide on private cloud storage hardware and software.
Double-Take Flex for HPC delivers iSCSI SAN boot option for HPC Server
Double-Take Software Inc. last week deployed Double-Take Flex for High Performance Computing (HPC), a diskless iSCSI SAN boot product for Microsoft Windows HPC Server 2008 R2.
Double-Take's product manager for the Flex product, Steve Marfisi, refers to Flex for HPC as a "one-vendor diskless boot solution for Microsoft HPC that uses the native PXE [Preboot eXecution Environment] boot capabilities on existing systems."
The solution's deployment arrives as Vision Solutions readies the completion of a $242 million acquisition of Double-Take. The agreement, made public in May, is likely to close in July, following the approval of Double-Take's shareholders.
See this tip on how to choose the right iSCSI initiator type.
Isilon updates scale-out NAS operating system with tiering, analytics
In the wake of the expansion of its storage tier selection in the past few months, scale-out NAS provider Isilon Systems Inc. has enhanced its operating system software to facilitate the management of data across various tiers, and has included advanced analytics as well.
The new features of Isilon's OneFS 6.0 include SmartPools and InsightIQ for the management and monitoring of data across every one of its NAS systems. Isilon's software enhancement arrives after a succession of hardware upgrades over the last year or so, as it launched systems for transactional data and archiving in 2009 and solid-state drive (SSD) systems in February. Isilon's strategy is to expand its NAS customer pool from high-performance markets, including broadcasting and genomics, to more conventional NAS applications.
SmartPools enables users to concurrently use multiple Isilon IQ storage nodes in one file system, volume and namespace. This allows customers to set up and run a storage tier across several levels of hardware performance.
InsightIQ is launched as a virtual appliance for VMware and offers performance and file-system analytics.
Read Eric Slack's take on why clustered NAS is a requirement for VARs in 2010.
Zmanda updates Zmanda Cloud Backup with DR capabilities
Zmanda Inc. last week launched the third version of its Zmanda Cloud Backup (ZCB), updating it with cloud disaster recovery, Microsoft Server 2010 support and bandwidth throttling to the online data backup service for SMBs.
Zmanda Cloud Backup 3 is also updated with support for Amazon S3 data centers in Singapore, and Zmanda reduced price rates for the service that safeguards desktops and servers. ZCB is founded on Zmanda's open-source backup software.
SMBs currently use ZCB 3 to operate production servers with cloud-based online backup data for disaster recovery in the event that the primary data location becomes unavailable.
Read this tip on disaster recovery testing for SMBs vs. the enterprise.
Sepaton updates VTLs with storage pooling and readies for expansion
Sepaton Inc. recently included storage pooling and enhanced monitoring and reporting capabilities for its S2100-ES2 virtual tape libraries (VTLs) while priming for the extension of the platform from past just a VTL to support Ethernet and file-based interfaces.
The S2100-ES2 is currently founded on the Hitachi Data Systems AMS 2100 storage platform. Additionally, the recent release of the operating system supports content-aware monitoring and reporting, and the trackable erasure of VTL cartridges.
Linda Mentzer, director of product and program management at Sepaton, said the company is shifting its data backup systems closer to a "unified secondary storage infrastructure." Sepaton, she said, has plans to add a network-attached storage (NAS) interface and support for 10 Gigabit Ethernet, iSCSI and Fibre Channel over Ethernet (FCoE).
Read the full story on Sepaton's addition of storage pooling to its VTLs.
Can 50 TB tape cartridges enhance data archiving?
Innovations in tape technology will amplify its density and capacity while rendering it increasingly searchable, prolonging tape's usefulness for data archiving rather than its standard function as a data backup solution.
This past May, Hitachi Maxell Ltd. and the Tokyo Institute of Technology claimed they have made available a new high-capacity tape media with the use of ultra-thin nano-structured magnetic film. The data tape cartridge consists of an areal density of 45 GB per square inch and offers greater than 50 TB of capacity per standard backup tape cartridge. This is 33 times greater than the capacity of existing LTO-5 tapes.
This top density was reached as a result of a new technique known as the facing targets sputtering method. Magnetron sputtering methods are presently utilized to make LTO tape and cannot be employed for fine composite films.
Read the full story on whether 50 TB tape cartridges can enhance data archiving.
LabTech releases disk-based backup and recovery tool
LabTech Software, maker of remote monitoring, management and automation solutions, has announced LT Backup, an integrated backup and disaster recovery tool for managed service providers (MSPs). LT Backup, which is sold as a hosted solution, comes with daily automated recoverability tests for backup restorations.
The company said the new tool captures snapshots of the entire system and restores the images to any computer or virtual machine. It performs a complete bare-metal recovery and server migration, including from physical servers to virtual servers. LabTech software can set up, schedule, execute and delete backup jobs and installs with wizard-based tools.
Additional storage news
|
http://searchitchannel.techtarget.com/news/1516019/HDS-two-vendors-boost-their-cloud-storage-Double-Take-Flex-for-HPC-delivers-iSCSI-SAN-boot-option
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Chapter 7. GProf Profile Plugin Support
Table of Contents
- 7.1. GProf Profile Plugin Installation
- 7.2. The eCosPro Runtime profile statistics package
- 7.3. Enabling profiling data generation within eCos and the eCosPro application
- 7.4. Enabling and Disabling profiling data collection
- 7.5. Extracting and Clearing the profiling data from the target
- 7.6. Display the profiling data using the GProf plugin
The Linux Tools GProf plugin brings the profiling capabilities of the GNU profiler, gprof, to Eclipse in a manner that is easy to use by developers with different levels of experience. eCos and eCosPro can generate the data used to produce gprof profile timing and call graphs (for additional information, please refer to the Profiling section of the eCos and eCosPro Reference Manual). The eCosPro CDT plug-in includes functionality to extract this data, through the Eclipse GUI, from an eCosPro application that is running or has been halted on the target hardware, and to provide it as input to the Linux Tools GProf plugin.
This section describes how to bring all the above functionality together to permit you to graphically view and explore the target hardware application's profile and timing data on your development host. This will allow you to analyse and explore your code within the Eclipse GUI to determine not only the parts of your application which execute more slowly than expected, but also many other statistics through which potential bugs can be discovered and resolved. This section also provides a walk-through of the installation of the Linux Tools GProf plugin, the configuration of eCosPro, and the creation and compilation of an eCosPro C/C++ Application Project such that profiling data is generated, through to the capture and display of that data.
7.1. GProf Profile Plugin Installation
The current Eclipse distribution provided with the eCosPro Developer's Kit version 4.x and above includes the Linux Tools GProf plugin pre-installed. If you are using an earlier version of Eclipse or a distribution from a different source, you can follow the instructions in this section to install the Linux Tools GProf plugin.
7.1.1. Check if Linux Tools GProf plugin is installed
To check whether the Linux Tools GProf plugin
is installed, within Eclipse select from the menu
→
and press the button. Within
the resulting dialog illustrated in
Figure 7.1, “Eclipse Installation Details - GProf” look in the
Configuration contents: section for
“
GProf Integration”. If present, the plugin
is installed.
Figure 7.1. Eclipse Installation Details - GProf
7.1.2. Install Linux Tools GProf plugin
To install the Linux Tools GProf plugin you will need internet access. Within Eclipse select from the menu
→
and within the Work with: field of the resulting
dialog, enter the name of your Eclipse Installation and choose the
download URL. An example illustration of the dialog is shown in
Figure 7.2, “Eclipse Install GProf”.
For example,
“
2018-09 -
Figure 7.2. Eclipse Install GProf
Select → and ensure the latter is checked. Select the button to confirm the install details, then confirm your acceptance of the license terms by selecting I accept the terms of the license agreement. Select the button to confirm agreement and begin the installation process. On completion of the installation you will be required to restart Eclipse.
7.2. The eCosPro Runtime profile statistics package
Runtime support is required from the operating system in order to create
and store the profiling data on the target platform and may easily be
added by including the
Application profile support
package, also known as
CYGPKG_PROFILE_GPROF, into your
eCos configuration. This may be achieved through one of two methods:
7.2.1. Adding CYGPKG_PROFILE_GPROF with the eCos Configuration Tool
Open up your eCos configuration within the eCos Configuration Tool, select Build → Packages (Ctrl+P) and type gprof into the Keywords field as illustrated in Figure 7.3, “Configuration Tool Install GProf Package”.
Figure 7.3. Configuration Tool Install GProf Package
If already installed, the package Application profile support will appear in the right-hand column under Use these packages. If not installed, it will appear under the Available Packages column; in this case select the Add button to move the package to the right-hand column, followed by the OK button to accept the addition of the package. Finally select File → Save (Ctrl+S) to save your configuration, followed optionally by Build → Library (F7) to rebuild the eCos library. The latter step is optional as Eclipse will rebuild the library as soon as you exit the eCos Configuration Tool if it detects a change in the active configuration file.
7.2.2. Adding CYGPKG_PROFILE_GPROF with the command line
Open a command shell with the appropriate environment variables
set and, assuming your eCos configuration is named
ecos.ecc, the commands illustrated in
Example 7.1, “Adding CYGPKG_PROFILE_GPROF with the command line” will add the
CYGPKG_PROFILE_GPROF package and rebuild the
eCos library.
Example 7.1. Adding CYGPKG_PROFILE_GPROF with the command line
$ ecosconfig add CYGPKG_PROFILE_GPROF
$ ecosconfig tree
$ make
7.2.3. Enabling TFTP support for profiling data extraction
Normally the capture of profiling data from the target platform requires the temporary suspension of all code execution on the target platform while the data is extracted, either through the use of a GDB monitor or a hardware debugger.
However, if your eCos Configuration includes the FreeBSD networking
stack (the
CYGPKG_NET_FREEBSD_STACK package) with the
TFTP server option enabled (the default), the profiling data may be
extracted from the target platform without temporarily suspending all
code execution. This is achieved through the use of an additional low-
priority eCos thread that provides a TFTP service (on port 69, the
default) which allows the transfer of the profiling data from the
target hardware to occur over TFTP. The eCos
CYGPKG_PROFILE_GPROF package by default creates this
thread when the
CYGPKG_NET_FREEBSD_STACK package is
enabled.
When capturing profiling data in this manner, the eCosPro CDT plug-in needs to be configured with the location from which the profiling data will be captured; otherwise the default method, through GDB, is used. Open the debug launch configuration you previously created to launch the binary in Section 4.1, “eCos Launch Configurations” and select the Profiling tab. This dialog is illustrated in Figure 7.4, “Profiling Data Capture through TFTP”.
Figure 7.4. Profiling Data Capture through TFTP
Check the Fetch data via TFTP instead of GDB (requires TFTP server on target) field and fill in appropriate values for the Hostname / IP address of the target platform's network address and the Port number. The default port number set by the CYGPKG_NET_FREEBSD_STACK package for the TFTP server is 69.
7.3. Enabling profiling data generation within eCos and the eCosPro application
The generation of profiling data does not happen automatically when the CYGPKG_PROFILE_GPROF package is added to eCos. The inclusion of this package within your eCos configuration only enables runtime support. Code has to be compiled with the -pg GNU compiler flag to make it capable of generating profiling data.
7.3.1. Compiling the application with the
-pg GNU compiler flag
You may enable this flag project-wide by bringing up the project's properties dialog as described in Section 5.3, “Application Project Properties”, selecting C/C++ Build → Settings, selecting the Tool Settings tab in the right-hand panel and, within the tree of settings, selecting the Debugging settings of either the C compiler or the C++ compiler (or both), depending on whether you wish profiling data to be generated for C code, C++ code, or both. Check the Generate gprof information (-pg) field in the right-hand panel, then apply the changes and close the dialog. This is illustrated in Figure 7.5, “Enable Application Profiling Data Generation”.
Figure 7.5. Enable Application Profiling Data Generation
The -pg GNU compiler flag may also be enabled or disabled for individual source files by opening the properties dialog for each source file (similar to opening the application project's properties, but start by highlighting the source file within the Project Explorer window of the C/C++ perspective) and checking or unchecking the Generate gprof information (-pg) field.
7.3.2. Compiling eCos with the -pg GNU compiler flag
eCos and eCosPro functions may also be included for profiling data generation and analysis. This may be achieved by adding the -pg GNU compiler flag to the value of the CYGBLD_GLOBAL_CFLAGS configuration macro.
To add or remove the flag, open the eCos configuration in the eCos Configuration Tool as described in Section 6.2.1, “Editing the eCos configuration project”, and search using the Find in configuration dialog, reached through the menu options Edit → Find (Ctrl+F), entering CYGBLD_GLOBAL_CFLAGS as Find what with Search in set to Macro Names. Once visible, click in the value field to edit the value in place (or bring up a String Edit dialog) and add or remove the -pg flag to/from the value as required. Finally select File → Save (Ctrl+S) to save your configuration, followed optionally by Build → Library (F7) to rebuild the eCos library.
7.4. Enabling and Disabling profiling data collection
Enabling and disabling the collection of profiling data is done programmatically by the application through the following two functions: profile_on and profile_off.
7.4.1. Enable profiling data collection
The application must call the function
profile_on
to start the collection of profiling data. If the TFTP daemon is
enabled, the call to
profile_on must happen once
the network is up and running, typically after the call to
init_all_network_interfaces. This is because the
TFTP daemon will be started within
profile_on.
A typical example is illustrated in Example 7.2, “Enable profiling data collection”.
Example 7.2. Enable profiling data collection
#include <pkgconf/system.h>
#ifdef CYGPKG_NET
# include <network.h>
#endif
#ifdef CYGPKG_PROFILE_GPROF
# include <cyg/profile/profile.h>
#endif
…
int main(int argc, char** argv)
{
    …
#ifdef CYGPKG_NET
    init_all_network_interfaces();
#endif
    …
#ifdef CYGPKG_PROFILE_GPROF
    {
        extern char _stext[], _etext[];
        profile_on(_stext, _etext, 16, 3500);
    }
#endif
    …
}
The
profile_on function takes four arguments:
start address,
end address
These two arguments specify the range of addresses to be profiled, which must be a contiguous section of memory. The eCos linker script exports the symbols _stext and _etext on most targets, and these correspond to the beginning and end of the code. Profiling may be performed on a subset of the code by specifying the start and end addresses of the code region on which profiling is to be performed.
bucket size
This is the size of the buckets into which profile_on divides the specified range of addresses; the memory required to hold the profiling data is allocated dynamically.
time interval
This specifies, in units of microseconds, the interval between profile timer interrupts. Increasing this value gives more accurate profiling results but will result in higher run-time overheads and a greater risk of a counter overflow. This value may be modified by the implementation because of hardware restrictions, so as a result the generated profile data contains the actual interval used.
7.4.2. Disabling profiling data collection
The collection of profiling data may be disabled by the
application, using a call to
profile_off.
This will also reset any existing profile data.
The function prototype is illustrated in
Example 7.3, “profile_off prototype”.
Example 7.3. profile_off prototype
void profile_off(void);
7.5. Extracting and Clearing the profiling data from the target
Extracting and clearing the profiling data from the target platform is a simple operation with eCosPro and the eCosPro CDT plug-in. When eCosPro is configured and built as described in Section 7.2.1, “Adding CYGPKG_PROFILE_GPROF with the eCos Configuration Tool”, the series of GDB macros required to extract and clear the profiling data from the target platform are installed by eCos in a file within the ${eCosInstallDir}/etc directory. When required, these macros are loaded by the eCosPro CDT plug-in into GDB and executed.
Ensure that you have included the code described in Section 7.4.1, “Enable profiling data collection” in your application and, if required, configured the launch profile for TFTP as described in Section 7.2.3, “Enabling TFTP support for profiling data extraction”, then start executing and debugging your application in the normal fashion.
Figure 7.6. Capture and Clear Profiling Data
7.5.1. Extracting the profiling data from the target
To capture the profiling data, select the application's process or one of the application's threads within the Debug Window of the Debug Perspective, and press the Take Profile Snapshot button called out as 1 in Figure 7.6, “Capture and Clear Profiling Data”.
This will extract the profiling data from the target, pausing and
resuming execution if necessary, and save it to a file within the
project explorer tree in the same directory as the binary executable.
The file will have a
.gmon extension with the
date and time of the snapshot as the base name, allowing multiple
snapshots to be taken and saved at different times.
7.5.2. Clearing the profiling data from the target
To clear the profiling data, press the Reset Profile Data button called out as 2 in Figure 7.6, “Capture and Clear Profiling Data” when either the application's process or one of the application's threads is selected within the Debug Window of the Debug Perspective. As with capturing the data, this will pause and resume execution of the application if necessary.
7.6. Display the profiling data using the GProf plugin
To display the profiling data in the GProf plugin, double-click on the .gmon filename corresponding to the data capture you would like to display, or right-click on the filename and select Open or the appropriate Open With entry.
If you have multiple binaries, the popup illustrated in Figure 7.7, “Gmon File Viewer: select binary” will appear prompting you to
select the binary used to generate the profiling data. If prompted,
select the application binary used to generate the profile data.
Figure 7.7. Gmon File Viewer: select binary
Select the gprof tab as illustrated in Figure 7.8, “Gprof tab window”.
Figure 7.8. Gprof tab window
The Gprof view shows how much execution time is consumed by each part of the application and also provides call graph information for each function. The buttons available are:
“Show/Hide columns” button allows you to select which columns to display.
“Export to CSV” button allows you to export the GProf result as a CSV text file.
“Sorting” button allows you to choose the columns, their priority and their ordering by which the data is sorted.
“Sort samples per file” button displays the GProf result sorted by file.
“Sort samples per function” button displays the GProf result sorted by function.
“Sort samples per line” button displays the GProf result sorted by line.
“Display function call graph” button displays the GProf result as a call graph.
“Switch sample/time” button allows you to switch the data between sample and time results.
“Create chart” button allows you to create a chart (bar, vertical bar, or pie) from the data of the selected columns.
For further documentation, please refer to the GProf User Guide.
https://doc.ecoscentric.com/cdt-guide/ch-gprof.html
Tools for creating and verifying consumer driven contracts using the Pact framework.
Project description
pactman
Python version of Pact mocking, generation and verification.
Enables consumer driven contract testing, providing unit test mocking of provider services and DSL for the consumer project, and interaction playback and verification for the service provider project. Currently supports versions 1.1, 2 and 3 of the Pact specification.
For more information about what Pact is, and how it can help you test your code more efficiently, check out the Pact documentation.
Contains code originally from the pact-python project.
pactman is maintained by the ReeceTech team as part of their toolkit to keep their large (and growing) microservices architecture under control.
- How to use pactman
- Development
pactman vs pact-python
The key difference is that all functionality is implemented in Python, rather than shelling out or forking to the Ruby implementation. This allows for a much nicer mocking user experience (it mocks urllib3 directly), is faster, and avoids messy configuration (with pact-python, multiple providers meant multiple Ruby processes spawned on different ports).
Where
pact-python required management of a background Ruby server, and manually starting and stopping
it,
pactman allows a much nicer usage like:
import requests
from pactman import Consumer, Provider

pact = Consumer('Consumer').has_pact_with(Provider('Provider'))

def test_interaction():
    pact.given("some data exists").upon_receiving("a request") \
        .with_request("get", "/", query={"foo": ["bar"]}).will_respond_with(200)
    with pact:
        requests.get(pact.uri, params={"foo": ["bar"]})
It also supports a broader set of the pact specification (versions 1.1 through to 3).
The pact verifier has been engineered from the start to talk to a pact broker (both to discover pacts and to return verification results).
There’s a few other quality of life improvements, but those are the big ones.
How to use pactman
Installation
pactman requires Python 3.6 to run.
pip install pactman
Writing a Pact
Creating a complete contract is a two step process:
- Create a unit test on the consumer side that declares the expectations it has of the provider
- Create a provider state that allows the contract to pass when replayed against the provider
Writing the Consumer Test
Say we have a method that communicates with one of our external services, which we'll call Provider, and our product, Consumer, hits an endpoint on Provider at /users/<user> to get information about a particular user.
If the
Consumer's code to fetch a user looked like this:
import requests

def get_user(user_name):
    # 'provider.example' is a placeholder host used for illustration
    response = requests.get(f'http://provider.example/users/{user_name}')
    return response.json()
Then
Consumer's contract test is a regular unit test, but using pactman for mocking,
and might look something like this:
import unittest
from pactman import Consumer, Provider

pact = Consumer('Consumer').has_pact_with(Provider('Provider'))

class GetUserInfoContract(unittest.TestCase):
    def test_get_user(self):
        expected = {
            'username': 'UserA',
            'id': 123,
            'groups': ['Editors']
        }

        pact.given(
            'UserA exists and is not an administrator'
        ).upon_receiving(
            'a request for UserA'
        ).with_request(
            'GET', '/users/UserA'
        ).will_respond_with(200, body=expected)

        with pact:
            result = get_user('UserA')

        self.assertEqual(result, expected)
This does a few important things:
- Defines the Consumer and Provider objects that describe our product and our service under test
- Uses given to define the setup criteria for the Provider: UserA exists and is not an administrator
- Defines what the request that is expected to be made by the consumer will contain
- Defines how the server is expected to respond
Using the Pact object as a context manager, we call our method under test which will then communicate with the Pact mock. The mock will respond with the items we defined, allowing us to assert that the method processed the response and returned the expected value.
If you want more control over when the mock is configured and the interactions verified,
use the
setup and
verify methods, respectively:
pact = Consumer('Consumer').has_pact_with(Provider('Provider'))

pact.given(
    'UserA exists and is not an administrator'
).upon_receiving(
    'a request for UserA'
).with_request(
    'GET', '/users/UserA'
).will_respond_with(200, body=expected)

pact.setup()
try:
    # Some additional steps before running the code under test
    result = get_user('UserA')
    # Some additional steps before verifying all interactions have occurred
finally:
    pact.verify()
An important note about pact relationship definition
You may have noticed that the pact relationship is defined at the module level in our examples:
pact = Consumer('Consumer').has_pact_with(Provider('Provider'))
This is because it must only be done once per test suite. By default the pact file is cleared out when that relationship is defined, so if you define it more than once per test suite you'll end up only storing the last pact declared per relationship. For more on this subject, see writing multiple pacts.
Requests
When defining the expected HTTP request that your code is expected to make you can specify the method, path, body, headers, and query:
pact.with_request( method='GET', path='/api/v1/my-resources/', query={'search': 'example'} )
query is used to specify URL query parameters, so the above example expects
a request made to
/api/v1/my-resources/?search=example.
pact.with_request( method='POST', path='/api/v1/my-resources/123', body={'user_ids': [1, 2, 3]}, headers={'Content-Type': 'application/json'}, )
You can define exact values for your expected request like the examples above, or you can use the matchers defined later to assist in handling values that are variable.
Some important has_pact_with() options
The
has_pact_with(provider...) call has quite a few options documented in its API, but a couple are
worth mentioning in particular:
version declares the pact specification version that the provider supports. This defaults to "2.0.0", but "3.0.0"
is also acceptable if your provider supports Pact specification version 3:
from pactman import Consumer, Provider

pact = Consumer('Consumer').has_pact_with(Provider('Provider'), version='3.0.0')
file_write_mode defaults to
"overwrite" and should be that or
"merge". Overwrite ensures
that any existing pact file will be removed when
has_pact_with() is invoked. Merge will retain
the pact file and add new pacts to that file. See writing multiple pacts.
If you absolutely do not want pact files to be written, use
"never".
use_mocking_server defaults to
False and controls the mocking method used by
pactman. The default is to
patch
urllib3, which is the library underpinning
requests and is also used by some other projects. If you
are using a different library to make your HTTP requests which does not use
urllib3 underneath then you will need
to set the
use_mocking_server argument to
True. This causes
pactman to run an actual HTTP server to mock the
requests (the server is listening on
pact.uri - use that to redirect your HTTP requests to the mock server.) You
may also set the
PACT_USE_MOCKING_SERVER environment variable to "yes" to force your entire suite to use the server
approach. You should declare the pact participants (consumer and provider) outside of your tests and will need
to start and stop the mocking service outside of your tests too. The code below shows what using the server might
look like:
import atexit
from pactman import Consumer, Provider

pact = Consumer('Consumer').has_pact_with(Provider('Provider'), use_mocking_server=True)
pact.start_mocking()
atexit.register(pact.stop_mocking)
You'd then use
pact to declare pacts between those participants.
Writing multiple pacts
During a test run you're likely to need to write multiple pact interactions for a consumer/provider
relationship.
pactman will manage the pact file as follows:
- When has_pact_with() is invoked it will by default remove any existing pact JSON file for the stated consumer & provider.
- You may invoke Consumer('Consumer').has_pact_with(Provider('Provider')) once at the start of your tests and store the result in a variable. This could be done as a pytest module or session fixture, or through some other mechanism (see the sketch after this list). By convention this variable is called pact in all of our examples.
- If that is not suitable, you may manually indicate to has_pact_with() that it should either retain (file_write_mode="merge") or remove (file_write_mode="overwrite") the existing pact file.
Some words about given()
You use
given() to indicate to the provider that they should have some state in order to
be able to satisfy the interaction. You should agree upon the state and its specification
in discussion with the provider.
If you are defining a version 3 pact you may define provider states more richly, for example:
(pact
 .given("this is a simple state as in v2")
 .and_given("also the user must exist", username="alex")
)
Now you may specify additional parameters to accompany your provider state text. These are passed as keyword arguments, and they're optional. You may also provide additional provider states using the and_given() call, which may be invoked many times if necessary. It and given() have the same calling convention: a provider state name and any optional parameters.
Expecting Variable Content
The default validity testing of equal values works great if that user information is always static, but what happens if the user has a last updated field that is set to the current time every time the object is modified? To handle variable data and make your tests more robust, there are several helpful matchers:
Includes(matcher, sample_data)
Available in version 3.0.0+ pacts
Asserts that the value should contain the given substring, for example:
from pactman import Includes, Like

Like({
    'id': 123,                                             # match integer, value varies
    'content': Includes('spam', 'Sample spamming content') # content must contain the string "spam"
})
The
matcher and
sample_data are used differently by consumer and provider depending
upon whether they're used in the
with_request() or
will_respond_with() sections
of the pact. Using the above example:
Includes in request
When you run the tests for the consumer, the mock will verify that the data
the consumer uses in its request contains the
matcher string, raising an AssertionError
if invalid. When the contract is verified by the provider, the
sample_data will be
used in the request to the real provider service, in this case
'Sample spamming content'.
Includes in response
When you run the tests for the consumer, the mock will return the data you provided
as
sample_data, in this case
'Sample spamming content'. When the contract is verified on the
provider, the data returned from the real provider service will be verified to ensure it
contains the
matcher string.
Term(matcher, sample_data)
Asserts the value should match the given regular expression. You could use this to expect a timestamp with a particular format in the request or response where you know you need a particular format, but are unconcerned about the exact date:
from pactman import Term

(pact
 .given('UserA exists and is not an administrator')
 .upon_receiving('a request for UserA')
 .with_request(
     'post',
     '/users/UserA/info',
     body={'commencement_date': Term('\d+-\d+-\d', '1972-01-01')})
 .will_respond_with(200, body={
     'username': 'UserA',
     'last_modified': Term('\d+-\d+-\d+T\d+:\d+:\d+', '2016-12-15T20:16:01')
 }))
The
matcher and
sample_data are used differently by consumer and provider depending
upon whether they're used in the
with_request() or
will_respond_with() sections
of the pact. Using the above example:
Term in request
When you run the tests for the consumer, the mock will verify that the
commencement_date
the consumer uses in its request matches the
matcher, raising an AssertionError
if invalid. When the contract is verified by the provider, the
sample_data will be
used in the request to the real provider service, in this case
1972-01-01.
Term in response
When you run the tests for the consumer, the mock will return the
last_modified you provided
as
sample_data, in this case
2016-12-15T20:16:01. When the contract is verified on the
provider, the regex will be used to search the response from the real provider service
and the test will be considered successful if the regex finds a match in the response.
Like(sample_data)
Asserts the element's type matches the
sample_data. For example:
from pactman import Like

Like(123)            # Matches if the value is an integer
Like('hello world')  # Matches if the value is a string
Like(3.14)           # Matches if the value is a float
Like in request
When you run the tests for the consumer, the mock will verify that values are
of the correct type, raising an AssertionError if invalid. When the contract is
verified by the provider, the
sample_data will be used in the request to the
real provider service.
Like in response
When you run the tests for the consumer, the mock will return the
sample_data.
When the contract is verified on the provider, the values generated by the provider
service will be checked to match the type of
sample_data.
Applying Like to complex data structures
When a dictionary is used as an argument for Like, all the child objects (and their child objects etc.) will be matched according to their types, unless you use a more specific matcher like a Term.
from pactman import Like, Term

Like({
    'username': Term('[a-zA-Z]+', 'username'),
    'id': 123,                    # integer
    'confirmed': False,           # boolean
    'address': {                  # dictionary
        'street': '200 Bourke St' # string
    }
})
EachLike(sample_data, minimum=1)
Asserts the value is an array type that consists of elements
like
sample_data. It can be used to assert simple arrays:
from pactman import EachLike

EachLike(1)        # All items are integers
EachLike('hello')  # All items are strings
Or other matchers can be nested inside to assert more complex objects:
from pactman import EachLike, Term

EachLike({
    'username': Term('[a-zA-Z]+', 'username'),
    'id': 123,
    'groups': EachLike('administrators')
})
Note: you do not need to specify everything that will be returned from the Provider in a JSON response; any extra data that is received will be ignored and the tests will still pass.
For more information see Matching
Enforcing equality matching with Equals
Available in version 3.0.0+ pacts
If you have a sub-term of a
Like which needs to match an exact value like the default
validity test then you can use
Equals, for example:
from pactman import Equals, Like

Like({
    'id': 123,                  # match integer, value varies
    'username': Equals('alex')  # username must always be "alex"
})
Body payload rules
The
body payload is assumed to be JSON data. In the absence of a
Content-Type header
we assume
Content-Type: application/json; charset=UTF-8 (JSON text is Unicode and the
default encoding is UTF-8).
During verification non-JSON payloads are compared for equality.
During mocking, the HTTP response will be handled as:
- If there's no Content-Type header, assume JSON: serialise with json.dumps(), encode to UTF-8 and add the header Content-Type: application/json; charset=UTF-8.
- If there's a Content-Type header and it says application/json then serialise with json.dumps() and use the charset in the header, defaulting to UTF-8.
- Otherwise pass through the Content-Type header and body as-is. Binary data is not supported.
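As an illustrative sketch (the state and endpoint names here are hypothetical), a response declared without a Content-Type header has its body serialised as JSON and sent with Content-Type: application/json; charset=UTF-8 by the mock:

from pactman import Consumer, Provider

pact = Consumer('Consumer').has_pact_with(Provider('Provider'))

# No Content-Type is declared, so the dict body below is treated as JSON.
(pact
 .given('a report exists')
 .upon_receiving('a request for the report')
 .with_request('GET', '/report')
 .will_respond_with(200, body={'status': 'complete'}))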
Verifying Pacts Against a Service
You have two options for verifying pacts against a service you created:
- Use the pactman-verifier command-line program, which replays the pact assertions against a running instance of your service, or
- Use the pytest support built into pactman to replay the pacts as test cases, allowing use of other testing mechanisms such as mocking and transaction control.
Using
pactman-verifier
Run
pactman-verifier -h to see the options available. To run all pacts registered to a provider in a Pact Broker:
pactman-verifier -b <broker url> <provider name> <provider url> <provider setup url>
You can pass in a local pact file with -l; this will verify the service against the local file instead of the broker:
pactman-verifier -l /tmp/localpact.json <provider name> <provider url> <provider setup url>
You can use --custom-provider-header to pass in headers to be passed to provider state setup and verify calls. It can be used multiple times:
pactman-verifier -b <broker url> --custom-provider-header "someheader:value" --custom-provider-header "this:that" <provider name> <provider url> <provider state url>
An additional header may also be supplied in the
PROVIDER_EXTRA_HEADER environment variable, though the command
line argument(s) would override this.
Provider States
In many cases, your contracts will need very specific data to exist on the provider to pass successfully. If you are fetching a user profile, that user needs to exist, if querying a list of records, one or more records needs to exist. To support decoupling the testing of the consumer and provider, Pact offers the idea of provider states to communicate from the consumer what data should exist on the provider.
When setting up the testing of a provider you will also need to setup the management of
these provider states. The Pact verifier does this by making additional HTTP requests to
the
<provider setup url> you provide. This URL could be
on the provider application or a separate one. Some strategies for managing state include:
- Having endpoints in your application that are not active in production that create and delete your datastore state
- A separate application that has access to the same datastore to create and delete, like a separate App Engine module or Docker container pointing to the same datastore
- A standalone application that can start and stop the other server with different datastore states
For more information about provider states, refer to the Pact documentation on Provider States.
Verifying Pacts Using
pytest
To verify pacts for a provider you would write a new pytest test module in the provider's test suite.
If you don't want it to be exercised in your usual unit test run you can call it
verify_pacts.py.
Your test code needs to use the
pact_verifier fixture provided by pactman, invoking
its
verify() method with the URL to the running instance of your service (
pytest-django provides
a handy
live_server fixture which works well here) and a callback to set up provider states (described
below).
You'll need to include some extra command-line arguments to pytest (also described below) to indicate where the pacts should come from, and whether verification results should be posted to a pact broker.
An example for a Django project might contain:
from django.contrib.auth.models import User
from pactman.verifier.verify import ProviderStateMissing

def provider_state(name, **params):
    if name == 'the user "pat" exists':
        User.objects.create(username='pat', fullname=params['fullname'])
    else:
        raise ProviderStateMissing(name)

def test_pacts(live_server, pact_verifier):
    pact_verifier.verify(live_server.url, provider_state)
The
pact_verifier.verify call may also take a third argument to supply additional HTTP headers
to send to the server during verification - specify them as a dictionary.
The test function may do any level of mocking and data setup using standard pytest fixtures - so mocking downstream APIs or other interactions within the provider may be done with standard monkeypatching.
Provider states using
pytest
The
provider_state function passed to
pact_verifier.verify will be passed the
providerState and
providerStates for all pacts being verified.
- For pacts with providerState the name argument will be the providerState value, and params will be empty.
- For pacts with providerStates the function will be invoked once per entry in the providerStates array, with the name argument taken from the array entry's name parameter, and params from its params parameter.
Command line options to control
pytest verifying pacts
Once you have written the pytest code, you need to invoke pytest with additional arguments:
--pact-broker-url=<URL> provides the base URL of the Pact broker to retrieve pacts from for the
provider. You must also provide
--pact-provider-name=<ProviderName> to identify which provider to
retrieve pacts for from the broker.
The broker URL and provider name may alternatively be provided through the environment variables
PACT_BROKER_URL and
PACT_PROVIDER_NAME.
You may provide
--pact-verify-consumer=<ConsumerName> to limit
the pacts verified to just that consumer. As with the command-line verifier, you may provide basic
auth details in the broker URL, or through the
PACT_BROKER_AUTH environment variable. If your broker
requires a bearer token you may provide it with
--pact-broker-token=<TOKEN> or the
PACT_BROKER_TOKEN
environment variable.
--pact-files=<file pattern> verifies some on-disk pact JSON files identified by the wildcard pattern
(unix glob pattern matching, use ** to match multiple directories).
If you pulled the pacts from a broker and wish to publish verification results, use
--pact-publish-results
to turn on publishing the results. This option also requires you to specify
--pact-provider-version=<version>.
So, for example:
# verify some local pacts in /tmp/pacts
$ pytest --pact-files=/tmp/pacts/*.json tests/verify_pacts.py

# verify some pacts in a broker for the provider MyService
$ pytest --pact-broker-url=<broker url> --pact-provider-name=MyService tests/verify_pacts.py
If you need to see the traceback that caused a pact failure you can use the verbosity flag
to pytest (
pytest -v).
See the "pact" section in the pytest command-line help (
pytest -h) for all command-line options.
Pact Broker Configuration
You may also specify the broker URL in the environment variable
PACT_BROKER_URL.
If HTTP Basic Auth is required for the broker, that may be provided in the URL:
pactman-verifier -b ...
pytest --pact-broker-url= ...
or set in the
PACT_BROKER_AUTH environment variable as
user:password.
If your broker needs a bearer token then you may provide that on the command line or set it in the
environment variable
PACT_BROKER_TOKEN.
Filtering Broker Pacts by Tag
If your consumer pacts have tags (called "consumer version tags" because they attach to specific versions) then you may specify the tag(s) to fetch pacts for on the command line. Multiple tags may be specified, and all pacts matching any tags specified will be verified. For example, to ensure you're verifying your Provider against the production pact versions from your Consumers, use:
pactman-verifier --consumer-version-tag=production -b ...
pytest --pact-verify-consumer-tag=production --pact-broker-url= ...
Development
Please read CONTRIBUTING.md
Release History
3.0.0 (FUTURE, DEPRECATION WARNINGS)
- remove DEPRECATED --pact-consumer-name command-line option
2.30.0
- DELETE requests may now have query strings, thanks @MazeDeveloper
- Nicer feedback if no pact source is specified on command line, thanks @artamonovkirill
- Add PACT_PROVIDER_NAME to environment vars, thanks @artamonovkirill
2.29.0
- Added support for
--pact-files, thanks @maksimt
2.28.0
- Fixed edge case where fail() was not being invoked in an exact match, causing the pytest reporter to not know there'd been a failure
- Address deprecation of semver.parse in semver
- Dropped Python 3.6 testing, added Python 3.8 testing
2.27.0
- Fix typo in pytest plugin preventing --pact-verify-consumer-tag from working
- Added PATCH support
2.26.0
- Allow pytest verification to specify
extra_provider_headers
2.25.0
- Add option to allow pytest to succeed even if a pact verification fails
2.24.0
- Better integration of pact failure information in pytest
2.23.0
- Enable setting of authentication credentials when connecting to the pact broker
- Allow filtering of pacts fetched from broker to be filtered by consumer version tag
- Improve the naming and organisation of the pytest command line options
2.22.0
- Better implementation of change in 2.21.0
2.21.0
- Handle warning level messages in command line output handler
2.20.0
- Fix pytest mode to correctly detect array element rule failure as a pytest failure
- Allow restricting pytest verification runs to a single consumer using --pact-consumer-name
2.19.0
- Correct teardown of pact context manager where the pact is used in multiple interactions (with interaction1, interaction2 instead of with pact).
2.18.0
- Correct bug in cleanup that resulted in urllib mocking breaking.
2.17.0
- Handle absence of any provider state (!) in pytest setup.
2.16.0
- Delay shenanigans around checking pacts directory until pacts are actually written to allow module-level pact definition without side effects.
2.15.0
- Fix structure of serialisation for header matching rules.
- Add "never" to the file_write_mode options.
- Handle x-www-form-urlencoded POST request bodies.
2.14.0
- Improve verbose messages to clarify what they're saying.
2.13.0
- Add ability to supply additional headers to provider during verification (thanks @ryallsa)
2.12.1
- Fix pact-python Term compatibility
2.12.0
- Add Equals and Includes matchers for pact v3+
- Make verification fail if missing header specified in interaction
- Significantly improved support for pytest provider verification of pacts
- Turned pact state call failures into warnings rather than errors
2.11.0
- Ensure query param values are lists
2.10.0
- Allow has_pact_with() to accept file_write_mode
- Fix bug introduced in 2.9.0 where generating multiple pacts would result in a single pact being recorded
2.9.0
- Fix with_request when called with a dict query (thanks Cong)
- Make start_mocking() and stop_mocking() optional with non-server mocking
- Add shortcut so python -m pactman.verifier.command_line is just python -m pactman (mostly used in testing before release)
- Handle the None provider state
- Ensure pact spec versions are consistent across all mocks used to generate a pact file
2.8.0
- Close up some edge cases in body content during mocking, and document in README
2.7.0
- Added and_given() as a method of defining additional provider states for v3+ pacts
- Added more tests for pact generation (serialisation) which fixed a few edge case bugs
- Fix handling of lower-case HTTP methods in verifier (thanks Cong!)
2.6.1
- Fix issue where mocked urlopen didn't handle the correct number of positional arguments
2.6.0
- Fix several issues caused by a failure to detect failure in several test cases (header, path and array element rules may not have been applied)
- Fix rules applying to a single non-first element in an array
- Fix generation of consumer / provider name in <v3 pacts
2.5.0
- Fix some bugs around empty array verification
2.4.0
- Create the pact destination dir if it's missing and its parent exists
2.3.0
- Fix some issues around mocking request queries and the mock's verification of same
- Fix header regex matching in mock verification
- Actually use the version passed in to
has_pact_with()
- Fix some pact v3 generation issues (thanks pan Jacek)
2.2.0
- Reinstate lost result output.
2.1.0
- Corrected the definition of request payload when there is no body in the request
2.0.0
- Correctly determine pact verification result when publishing to broker.
1.2.0
- Corrected use of format_path in command line error handling.
- Tweaked README for clarity.
1.1.0
- Renamed the pact-verifier command to pactman-verifier, to avoid confusion with other pre-existing packages that provide an incompatible pact-verifier command.
- Support verification of HEAD requests (oops).
1.0.8
- Corrected project URL in project metadata (thanks Jonathan Moss)
- Fix verbose output
1.0.7
- Added some Trove classifiers to aid potential users.
1.0.6
- Corrected mis-named command-line option.
1.0.5
- Corrected some packaging issues
1.0.4
- Initial release of pactman, including ReeceTech's pact-verifier version 3.17 and pact-python version 0.17.0
https://pypi.org/project/pactman/
tensorflow v2.4.0-rc3 Release Notes
Release Date: 2020-11-24
🚀 Release 2.4.0
Major Features and Improvements
tf.distribute introduces experimental support for asynchronous training of Keras models via the tf.distribute.experimental.ParameterServerStrategy API. Please see below for additional details.
MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.
📄 Introduces experimental support for a new module named tf.experimental.numpy which is a NumPy-compatible API for writing TF programs. See the detailed guide to learn more. Additional details below.
➕ Adds support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.
🐎 A major refactoring of the internals of the Keras Functional API has been completed, which should improve the reliability, stability, and performance of constructing Functional models.
Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs. Please see below for additional details.
👷 TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.
TFLite Profiler for Android is available. See the detailed guide to learn more.
📦 TensorFlow pip packages are now built with CUDA11 and cuDNN 8.0.2.
💥 Breaking Changes
TF Core:
- Certain float32 ops run in lower precision on Ampere based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10
bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops.
TensorFloat-32 can be disabled by running
tf.config.experimental.enable_tensor_float_32_execution(False).
- The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of
tensorflow::tstring/
TF_TStrings.
- C-API functions TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
- tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of TensorFlow public API.
- tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
- XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
tf.keras:
- The steps_per_execution argument in compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead.
- A major refactoring of the internals of the Keras Functional API may affect code that is relying on certain internal details:
- Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
- Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using tensor.ref(), etc.)
- Code that uses get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
- Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into may have changed.
- Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
- Code that directly asserts on a Keras symbolic value in cases where ops like tf.rank used to return a static or symbolic value depending on if the input had a fully static shape or not. Now these ops always return symbolic values.
- Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
- Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use GradientTape on the actual Tensors passed to the already-constructed model instead.
- Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
- Code that tries manually walking a tf.keras.Model layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.
- Code that manually enters keras.backend.get_graph() before building a functional model is no longer needed.
- Start enforcing input shape assumptions when calling Functional API Keras models. This may potentially break some users, in case there is a mismatch between the shape used when creating Input objects in a Functional model, and the shape of the data passed to that model. You can fix this mismatch by either calling the model with correctly-shaped data, or by relaxing Input shape assumptions (note that you can pass shapes with None entries for axes that are meant to be dynamic). You can also disable the input checking entirely by setting model.input_spec = None.
- Several changes have been made to tf.keras.mixed_precision.experimental. Note that it is now recommended to use the non-experimental tf.keras.mixed_precision API.
- AutoCastVariable.dtype now refers to the actual variable dtype, not the dtype it will be casted to.
- When mixed precision is enabled, tf.keras.layers.Embedding now outputs a float16 or bfloat16 tensor instead of a float32 tensor.
- The property tf.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale is now a tensor, not a LossScale object. This means to get a loss scale of a LossScaleOptimizer as a tensor, you must now call opt.loss_scale instead of opt.loss_scale().
- The property should_cast_variables has been removed from tf.keras.mixed_precision.experimental.Policy
- When passing a tf.mixed_precision.experimental.DynamicLossScale to tf.keras.mixed_precision.experimental.LossScaleOptimizer, the DynamicLossScale's multiplier must be 2.
- When passing a tf.mixed_precision.experimental.DynamicLossScale to tf.keras.mixed_precision.experimental.LossScaleOptimizer, the weights of the DynamicLossScale are copied into the LossScaleOptimizer instead of being reused. This means modifying the weights of the DynamicLossScale will no longer affect the weights of the LossScaleOptimizer, and vice versa.
- The global policy can no longer be set to a non-floating point policy in tf.keras.mixed_precision.experimental.set_policy
- In Layer.call, AutoCastVariables will no longer be casted within MirroredStrategy.run or ReplicaContext.merge_call. This is because a thread local variable is used to determine whether AutoCastVariables are casted, and those two functions run with a different thread. Note this only applies if one of these two functions is called within Layer.call; if one of those two functions calls Layer.call, AutoCastVariables will still be casted.
tf.data:
tf.data.experimental.service.DispatchServer now takes a config tuple instead of individual arguments. Usages should be updated to tf.data.experimental.service.DispatchServer(dispatcher_config).
tf.data.experimental.service.WorkerServer now takes a config tuple instead of individual arguments. Usages should be updated to tf.data.experimental.service.WorkerServer(worker_config).
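A minimal sketch of the updated usage, assuming the DispatcherConfig/WorkerConfig tuples and default local settings:

import tensorflow as tf

# Hypothetical local setup; field names are assumed from the config tuples.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(port=5050))

worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))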
tf.distribute:
- Removes tf.distribute.Strategy.experimental_make_numpy_dataset. Please use tf.data.Dataset.from_tensor_slices instead.
- Renames experimental_hints in tf.distribute.StrategyExtended.reduce_to, tf.distribute.StrategyExtended.batch_reduce_to, tf.distribute.ReplicaContext.all_reduce to options:
- Renames tf.distribute.experimental.CollectiveHints to tf.distribute.experimental.CommunicationOptions.
- Renames tf.distribute.experimental.CollectiveCommunication to tf.distribute.experimental.CommunicationImplementation.
- Renames tf.distribute.Strategy.experimental_distribute_datasets_from_function to distribute_datasets_from_function as it is no longer experimental.
- Removes tf.distribute.Strategy.experimental_run_v2 method, which was deprecated in TF 2.2.
tf.lite:
tf.quantization.quantize_and_dequantize_v2 has been introduced, which updates the gradient definition for quantization which is outside the range to be 0. To simulate the V1 behavior of tf.quantization.quantize_and_dequantize(...), use tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...).
🐛 Bug Fixes and Other Changes
TF Core:
- 👍 Introduces experimental support for a new module named tf.experimental.numpy, which is a NumPy-compatible API for writing TF programs. This module provides class ndarray, which mimics the ndarray class in NumPy, and wraps an immutable tf.Tensor under the hood. A subset of NumPy functions (e.g. numpy.add) are provided. Their inter-operation with TF facilities is seamless in most cases. See tensorflow/python/ops/numpy_ops/README.md for details of what operations are supported and what are the differences from NumPy. A short sketch appears after this list.
- tf.types.experimental.TensorLike is a new Union type that can be used as type annotation for variables representing a Tensor or a value that can be converted to Tensor by tf.convert_to_tensor.
- Calling ops with python constants or numpy values is now consistent with tf.convert_to_tensor behavior. This avoids operations like tf.reshape truncating inputs such as from int64 to int32.
- ➕ Adds tf.sparse.map_values to apply a function to the .values of SparseTensor arguments.
- The Python bitwise operators for Tensor (__and__, __or__, __xor__ and __invert__) now support non-bool arguments and apply the corresponding bitwise ops. bool arguments continue to be supported and dispatch to logical ops. This brings them more in line with Python and NumPy behavior.
- ➕ Adds tf.SparseTensor.with_values. This returns a new SparseTensor with the same sparsity pattern, but with new provided values. It is similar to the with_values function of RaggedTensor.
- ➕ Adds StatelessCase op, and uses it if none of the case branches has stateful ops.
- Adds tf.config.experimental.get_memory_usage to return total memory usage of the device.
- ➕ Adds gradients for RaggedTensorToVariant and RaggedTensorFromVariant.
- 👌 Improve shape inference of nested function calls by supporting constant folding across Arg nodes which makes more static values available to shape inference functions.
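A short sketch of the tf.experimental.numpy module mentioned above (the specific function names used here are assumed from its NumPy-compatible API):

import tensorflow as tf
import tensorflow.experimental.numpy as tnp

x = tnp.asarray([[1.0, 2.0], [3.0, 4.0]])  # tnp.ndarray wrapping a tf.Tensor
y = tnp.add(x, 1.0)                        # NumPy-style op
z = tf.reduce_sum(y)                       # interoperates with regular TF ops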
tf.debugging:
- GPU
- Adds Support for TensorFloat-32 on Ampere based GPUs.
TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs which causes certain float32 ops, such as matrix
multiplications and convolutions, to run much faster on Ampere GPUs but with reduced precision. This reduced precision has not been found
to affect convergence quality of deep learning models in practice. TensorFloat-32 is enabled by default, but can be disabled with
tf.config.experimental.enable_tensor_float_32_execution.
tf.math:
- Adds
tf.math.erfcinv, the inverse to
tf.math.erfc.
tf.nn:
tf.nn.max_pool2d now supports explicit padding.
tf.image:
- Adds deterministic tf.image.stateless_random_* functions for each tf.image.random_* function. Added a new op stateless_sample_distorted_bounding_box which is a deterministic version of the sample_distorted_bounding_box op. Given the same seed, these stateless functions/ops produce the same results independent of how many times the function is called, and independent of global seed settings.
- Adds deterministic tf.image.resize backprop CUDA kernels for method=ResizeMethod.BILINEAR (the default method). Enable by setting the environment variable TF_DETERMINISTIC_OPS to "true" or "1".
tf.print:
- Bug fix in tf.print() with OrderedDict where if an OrderedDict didn't have the keys sorted, the keys and values were not being printed in accordance with their correct mapping.
tf.train.Checkpoint:
- Now accepts a root argument in the initialization, which generates a checkpoint with a root object. This allows users to create a Checkpoint object that is compatible with Keras model.save_weights() and model.load_weights. The checkpoint is also compatible with the checkpoint saved in the variables/ folder in the SavedModel.
- When restoring, save_path can be a path to a SavedModel. The function will automatically find the checkpoint in the SavedModel.
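A minimal sketch of the new root argument (the model and paths here are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# Rooting the checkpoint at the model is what, per the note above, makes it
# line up with model.save_weights()/model.load_weights().
ckpt = tf.train.Checkpoint(root=model)
path = ckpt.save('/tmp/rooted/ckpt')
ckpt.restore(path)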
tf.data:
- Adds new tf.data.experimental.service.register_dataset and tf.data.experimental.service.from_dataset_id APIs to enable one process to register a dataset with the tf.data service, and another process to consume data from the dataset.
- ➕ Adds support for dispatcher fault tolerance. To enable fault tolerance, configure a work_dir when running your dispatcher server and set dispatcher_fault_tolerance=True. The dispatcher will store its state to work_dir, so that on restart it can continue from its previous state.
- ➕ Adds support for sharing dataset graphs via shared filesystem instead of over RPC. This reduces load on the dispatcher, improving performance of distributing datasets. For this to work, the dispatcher's work_dir must be accessible from workers. If the worker fails to read from the work_dir, it falls back to using RPC for dataset graph transfer.
- ➕ Adds support for a new "distributed_epoch" processing mode. This processing mode distributes a dataset across all tf.data workers, instead of having each worker process the full dataset. See the tf.data service docs to learn more.
- Adds optional exclude_cols parameter to CsvDataset. This parameter is the complement of select_cols; at most one of these should be specified.
- We have implemented an optimization which reorders data-discarding transformations such as take and shard to happen earlier in the dataset when it is safe to do so. The optimization can be disabled via the experimental_optimization.reorder_data_discarding_ops dataset option.
- tf.data.Options were previously immutable and can now be overridden.
- tf.data.Dataset.from_generator now supports Ragged and Sparse tensors with a new output_signature argument, which allows from_generator to produce any type describable by a tf.TypeSpec (see the sketch after this list).
- tf.data.experimental.AUTOTUNE is now available in the core API as tf.data.AUTOTUNE.
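A small sketch of from_generator with output_signature, as referenced above (the generator and spec here are illustrative):

import tensorflow as tf

def gen():
    yield tf.ragged.constant([[1, 2], [3]])

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32))

for element in ds.take(1):
    print(element)  # a tf.RaggedTensor produced by the generator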
tf.distribute:
- 👍 Introduces experimental support for asynchronous training of Keras models via tf.distribute.experimental.ParameterServerStrategy:
- Replaces the existing tf.distribute.experimental.ParameterServerStrategy symbol with a new class that is for parameter server training in TF2. Usage of the old symbol, usually with Estimator API, should be replaced with tf.compat.v1.distribute.experimental.ParameterServerStrategy.
- Added the tf.distribute.experimental.coordinator.* namespace, including the main API ClusterCoordinator for coordinating the training cluster, and the related data structures RemoteValue and PerWorkerValue.
- ➕ Adds tf.distribute.Strategy.gather and tf.distribute.ReplicaContext.all_gather APIs to support gathering dense distributed values.
- 🛠 Fixes various issues with saving a distributed model.
tf.keras:
- 👌 Improvements from the Functional API refactoring:
- Functional model construction does not need to maintain a global workspace graph, removing memory leaks especially when building many
models or very large models.
- Functional model construction should be ~8-10% faster on average.
- Functional models can now contain non-symbolic values in their call inputs inside of the first positional argument.
- Several classes of TF ops that were not reliably converted to Keras layers during functional API construction should now work, e.g.
tf.image.ssim_multiscale
- Error messages when Functional API construction goes wrong (and when ops cannot be converted to Keras layers automatically) should be
clearer and easier to understand.
Optimizer.minimize can now accept a loss Tensor and a GradientTape as an alternative to accepting a callable loss (see the sketch at the end of this list).
- ➕ Adds beta hyperparameter to FTRL optimizer classes (Keras and others) to match the FTRL paper.
Optimizer.__init__ now accepts a gradient_aggregator to allow for customization of how gradients are aggregated across devices, as well as gradients_transformers to allow for custom gradient transformations (such as gradient clipping).
- 👌 Improvements to Keras preprocessing layers:
- TextVectorization can now accept a vocabulary list or file as an init arg.
- Normalization can now accept mean and variance values as init args.
- In Attention and AdditiveAttention layers, the call() method now accepts a return_attention_scores argument. When set to True, the layer returns the attention scores as an additional output argument.
- ➕ Adds
tf.metrics.log_coshand
tf.metrics.logcoshAPI entrypoints with the same implementation as their
tf.lossesequivalent.
- For Keras model, the individual call of
Model.evaluateuses no cached data for evaluation, while
Model.fituses cached data when
🐎
validation_dataarg is provided for better performance.
- Adds a
save_tracesargument to
model.save/
tf.keras.models.save_modelwhich determines whether the SavedModel format stores the Keras model/layer call functions. The traced functions allow Keras to revive custom models and layers without the original class definition, but if this isn't required the tracing can be disabled with the added option.
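A minimal sketch of the new argument (illustrative only; the model and path are made up, and the default SavedModel format is assumed):
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Skip storing traced call functions; reloading will then require the original class definitions.
model.save("my_model", save_traces=False)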
- The tf.keras.mixed_precision API is now non-experimental. The non-experimental API differs from the experimental API in several ways.
- tf.keras.mixed_precision.Policy no longer takes in a tf.mixed_precision.experimental.LossScale in the constructor, and no longer has a LossScale associated with it. Instead, Model.compile will automatically wrap the optimizer with a LossScaleOptimizer using dynamic loss scaling if Policy.name is "mixed_float16".
- tf.keras.mixed_precision.LossScaleOptimizer's constructor takes in different arguments. In particular, it no longer takes in a LossScale, and there is no longer a LossScale associated with the LossScaleOptimizer. Instead, LossScaleOptimizer directly implements fixed or dynamic loss scaling. See the documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer for details on the differences between the experimental LossScaleOptimizer and the new non-experimental LossScaleOptimizer.
- 🗄 tf.mixed_precision.experimental.LossScale and its subclasses are deprecated, as all of their functionality now exists within tf.keras.mixed_precision.LossScaleOptimizer.
tf.lite:
TFLiteConverter:
- Support optional flags inference_input_type and inference_output_type for full integer quantized models. This allows users to modify the model input and output type to integer types (tf.int8, tf.uint8) instead of defaulting to float type (tf.float32).
- NNAPI
- Adds NNAPI Delegation support for requantization use cases by converting the operation into a dequantize-quantize pair.
- Removes deprecated Interpreter.setUseNNAPI(boolean) Java API. Use Interpreter.Options.setUseNNAPI instead.
- Deprecates Interpreter::UseNNAPI(bool) C++ API. Use NnApiDelegate() and related delegate configuration methods directly.
- Deprecates Interpreter::SetAllowFp16PrecisionForFp32(bool) C++ API. Prefer controlling this via delegate options, e.g. tflite::StatefulNnApiDelegate::Options::allow_fp16 or TfLiteGpuDelegateOptionsV2::is_precision_loss_allowed.
- GPU
- GPU acceleration now supports quantized models by default
- DynamicBuffer::AddJoinedString() will now add a separator if the first string to be joined is empty.
- ➕ Adds support for cumulative sum (cumsum), both as builtin op and MLIR conversion.
TensorRT
- Issues a warning when the session_config parameter for the TF1 converter is used or the rewrite_config_template field in the TF2 converter parameter object is used.
TPU Enhancements:
- ➕ Adds support for the beta parameter of the FTRL optimizer for TPU embeddings. Users of other TensorFlow platforms can implement equivalent behavior by adjusting the l2 parameter.
👍 XLA Support:
- 🗄 xla.experimental.compile is deprecated, use tf.function(experimental_compile=True) instead.
- Adds tf.function.experimental_get_compiler_ir which returns compiler IR (currently 'hlo' and 'optimized_hlo') for given input for a given function.
🔒 Security:
- 🛠 Fixes an undefined behavior causing a segfault in tf.raw_ops.Switch (CVE-2020-15190)
- 🛠 Fixes three vulnerabilities in conversion to DLPack format
- 🛠 Fixes two vulnerabilities in SparseFillEmptyRowsGrad
- 🛠 Fixes several vulnerabilities in RaggedCountSparseOutput and SparseCountSparseOutput operations
- 🛠 Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- 🛠 Fixes a format string vulnerability in tf.strings.as_string (CVE-2020-15203)
- 🛠 Fixes segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- 🛠 Fixes data leak and potential ASLR violation from tf.raw_ops.StringNGrams (CVE-2020-15205)
- 🛠 Fixes segfaults caused by incomplete SavedModel validation (CVE-2020-15206)
- 🛠 Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- 🛠 Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- 🛠 Fixes several vulnerabilities in TFLite saved model format
- 🛠 Fixes several vulnerabilities in TFLite implementation of segment sum
- Fixes a segfault in tf.quantization.quantize_and_dequantize (CVE-2020-15265)
- 🛠 Fixes an undefined behavior float cast causing a crash (CVE-2020-15266)
Other:
- 💅 We have replaced uses of "whitelist" and "blacklist" with "allowlist" and "denylist" where possible. Please see this list for more context.
- Adds tf.config.experimental.mlir_bridge_rollout which will help us rollout the new MLIR TPU bridge.
- Adds tf.experimental.register_filesystem_plugin to load modular filesystem plugins from Python
Thanks to our Contributors
🚀 This release contains contributions from many people at Google and external contributors.
8bitmp3, aaa.jq, Abhineet Choudhary, Abolfazl Shahbazi, acxz, Adam Hillier, Adrian Garcia Badaracco, Ag Ramesh, ahmedsabie, Alan Anderson, Alexander Grund, Alexandre Lissy, Alexey Ivanov, Amedeo Cavallo, anencore94, Aniket Kumar Singh, Anthony Platanios, Ashwin Phadke, Balint Cristian, Basit Ayantunde, bbbboom, Ben Barsdell, Benjamin Chetioui, Benjamin Peterson, bhack, Bhanu Prakash Bandaru Venkata, Biagio Montaruli, Brent M. Spell, bubblebooy, bzhao, cfRod, Cheng Chen, Cheng(Kit) Chen, Chris Tessum, Christian, chuanqiw, codeadmin_peritiae, COTASPAR, CuiYifeng, danielknobe, danielyou0230, dannyfriar, daria, DarrenZhang01, Denisa Roberts, dependabot[bot], Deven Desai, Dmitry Volodin, Dmitry Zakharov, drebain, Duncan Riach, Eduard Feicho, Ehsan Toosi, Elena Zhelezina, emlaprise2358, Eugene Kuznetsov, Evaderan-Lab, Evgeniy Polyakov, Fausto Morales, Felix Johnny, fo40225, Frederic Bastien, Fredrik Knutsson, fsx950223, Gaurav Singh, Gauri1 Deshpande, George Grzegorz Pawelczak, gerbauz, Gianluca Baratti, Giorgio Arena, Gmc2, Guozhong Zhuang, Hannes Achleitner, Harirai, HarisWang, Harsh188, hedgehog91, Hemal Mamtora, Hideto Ueno, Hugh Ku, Ian Beauregard, Ilya Persky, jacco, Jakub Beránek, Jan Jongboom, Javier Montalt Tordera, Jens Elofsson, Jerry Shih, jerryyin, jgehw, Jinjing Zhou, jma, jmsmdy, Johan Nordström, John Poole, Jonah Kohn, Jonathan Dekhtiar, jpodivin, Jung Daun, Kai Katsumata, Kaixi Hou, Kamil Rakoczy, Kaustubh Maske Patil, Kazuaki Ishizaki, Kedar Sovani, Koan-Sin Tan, Koki Ibukuro, Krzysztof Laskowski, Kushagra Sharma, Kushan Ahmadian, Lakshay Tokas, Leicong Li, levinxo, Lukas Geiger, Maderator, Mahmoud Abuzaina, Mao Yunfei, Marius Brehler, markf, Martin Hwasser, Martin Kubovčík, Matt Conley, Matthias, mazharul, mdfaijul, Michael137, MichelBr, Mikhail Startsev, Milan Straka, Ml-0, Myung-Hyun Kim, Måns Nilsson, Nathan Luehr, ngc92, nikochiko, Niranjan Hasabnis, nyagato_00, Oceania2018, Oleg Guba, Ongun Kanat, OscarVanL, Patrik Laurell, Paul Tanger, Peter Sobot, Phil Pearl, PlusPlusUltra, Poedator, Prasad Nikam, Rahul-Kamat, Rajeshwar Reddy T, redwrasse, Rickard, Robert Szczepanski, Rohan Lekhwani, Sam Holt, Sami Kama, Samuel Holt, Sandeep Giri, sboshin, Sean Settle, settle, Sharada Shiddibhavi, Shawn Presser, ShengYang1, Shi,Guangyong, Shuxiang Gao, Sicong Li, Sidong-Wei, Srihari Humbarwadi, Srinivasan Narayanamoorthy, Steenu Johnson, Steven Clarkson, stjohnso98, Tamas Bela Feher, Tamas Nyiri, Tarandeep Singh, Teng Lu, Thibaut Goetghebuer-Planchon, Tim Bradley, Tomasz Strejczek, Tongzhou Wang, Torsten Rudolf, Trent Lo, Ty Mick, Tzu-Wei Sung, Varghese, Jojimon, Vignesh Kothapalli, Vishakha Agrawal, Vividha, Vladimir Menshakov, Vladimir Silyaev, VoVAllen, Võ Văn Nghĩa, wondertx, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yair Ehrenwald, Yasir Modak, Yasuhiro Matsumoto, Yimei Sun, Yiwen Li, Yixing, Yoav Ramon, Yong Tang, Yong Wu, yuanbopeng, Yunmo Koo, Zhangqiang, Zhou Peng, ZhuBaohe, zilinzhu, zmx
|
https://python.libhunt.com/tensorflow-changelog/2.4.0-rc3
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Reading and writing files and directories with the browser-fs-access library
Browsers have been able to deal with files and directories for a long time. The File API provides features for representing file objects in web applications, as well as programmatically selecting them and accessing their data. The moment you look closer, though, all that glitters is not gold.
The traditional way of dealing with files #
Opening files #
As a developer, you can open and read files via the
<input type="file"> element. In its simplest form, opening a file can look something like the code sample below. The
input object gives you a
FileList, which in the case below consists of just one
File. A
File is a specific kind of
Blob, and can be used in any context that a Blob can.
const openFile = async () => {
return new Promise((resolve) => {
const input = document.createElement('input');
input.type = 'file';
input.addEventListener('change', () => {
resolve(input.files[0]);
});
input.click();
});
};
Opening directories #
For opening folders (or directories), you can set the
<input webkitdirectory> attribute. Apart from that, everything else works the same as above. Despite its vendor-prefixed name,
webkitdirectory is not only usable in Chromium and WebKit browsers, but also in the legacy EdgeHTML-based Edge as well as in Firefox.
Saving (rather: downloading) files #
For saving a file, traditionally, you are limited to downloading a file, which works thanks to the
<a download> attribute. Given a Blob, you can set the anchor's
href attribute to a
blob: URL that you can get from the
URL.createObjectURL() method.
const saveFile = async (blob) => {
const a = document.createElement('a');
a.download = 'my-file.txt';
a.href = URL.createObjectURL(blob);
a.addEventListener('click', (e) => {
setTimeout(() => URL.revokeObjectURL(a.href), 30 * 1000);
});
a.click();
};
The problem #
A massive downside of the download approach is that there is no way to make a classic open→edit→save flow happen, that is, there is no way to overwrite the original file. Instead, you end up with a new copy of the original file in the operating system's default Downloads folder whenever you "save".
The File System Access API #
The File System Access API makes both operations, opening and saving, a lot simpler. It also enables true saving, that is, you can not only choose where to save a file, but also overwrite an existing file.
Opening files #
With the File System Access API, opening a file is a matter of one call to the
window.showOpenFilePicker() method. This call returns a file handle, from which you can get the actual
File via the
getFile() method.
const openFile = async () => {
try {
// Always returns an array.
const [handle] = await window.showOpenFilePicker();
return handle.getFile();
} catch (err) {
console.error(err.name, err.message);
}
};
Opening directories #
Open a directory by calling
window.showDirectoryPicker(), which makes directories selectable in the file dialog box.
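For example, a minimal sketch (not from the original article) that opens a directory and lists its entries:
const openDirectory = async () => {
  try {
    // Returns a FileSystemDirectoryHandle for the chosen directory.
    const handle = await window.showDirectoryPicker();
    for await (const [name, entry] of handle.entries()) {
      console.log(name, entry.kind); // 'file' or 'directory'
    }
    return handle;
  } catch (err) {
    console.error(err.name, err.message);
  }
};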
Saving files #
Saving files is similarly straightforward. From a file handle, you create a writable stream via
createWritable(), then you write the Blob data by calling the stream's
write() method, and finally you close the stream by calling its
close() method.
const saveFile = async (blob) => {
try {
const handle = await window.showSaveFilePicker({
types: [{
accept: {
// Omitted
},
}],
});
const writable = await handle.createWritable();
await writable.write(blob);
await writable.close();
return handle;
} catch (err) {
console.error(err.name, err.message);
}
};
Introducing browser-fs-access #
As perfectly fine as the File System Access API is, it's not yet widely available.
This is why I see the File System Access API as a progressive enhancement. As such, I want to use it when the browser supports it, and use the traditional approach if not; all while never punishing the user with unnecessary downloads of unsupported JavaScript code. The browser-fs-access library is my answer to this challenge.
Design philosophy #
Since the File System Access API is still likely to change in the future, the browser-fs-access API is not modeled after it. That is, the library is not a polyfill, but rather a ponyfill. You can (statically or dynamically) exclusively import whatever functionality you need to keep your app as small as possible. The available methods are the aptly named
fileOpen(),
directoryOpen(), and
fileSave(). Internally, the library feature-detects if the File System Access API is supported, and then imports the corresponding code path.
Using the browser-fs-access library #
The three methods are intuitive to use. You can specify your app's accepted
mimeTypes or file
extensions, and set a
multiple flag to allow or disallow selection of multiple files or directories. For full details, see the browser-fs-access API documentation. The code sample below shows how you can open and save image files.
// The imported methods will use the File
// System Access API or a fallback implementation.
import {
fileOpen,
directoryOpen,
fileSave,
} from '
(async () => {
// Open an image file.
const blob = await fileOpen({
mimeTypes: ['image/*'],
});
// Open multiple image files.
const blobs = await fileOpen({
mimeTypes: ['image/*'],
multiple: true,
});
// Open all files in a directory,
// recursively including subdirectories.
const blobsInDirectory = await directoryOpen({
recursive: true
});
// Save a file.
await fileSave(blob, {
fileName: 'Untitled.png',
});
})();
Demo #
You can see the above code in action in a demo on Glitch. Its source code is likewise available there. Since for security reasons cross origin sub frames are not allowed to show a file picker, the demo cannot be embedded in this article.
The browser-fs-access library in the wild #
In my free time, I contribute a tiny bit to an installable PWA called Excalidraw, a whiteboard tool that lets you easily sketch diagrams with a hand-drawn feel. It is fully responsive and works well on a range of devices from small mobile phones to computers with large screens. This means it needs to deal with files on all the various platforms whether or not they support the File System Access API. This makes it a great candidate for the browser-fs-access library.
I can, for example, start a drawing on my iPhone, save it (technically: download it, since Safari does not support the File System Access API) to my iPhone Downloads folder, open the file on my desktop (after transferring it from my phone), modify the file, and overwrite it with my changes, or even save it as a new file.
Real life code sample #
Below, you can see an actual example of browser-fs-access as it is used in Excalidraw. This excerpt is taken from
/src/data/json.ts. Of special interest is how the
saveAsJSON() method passes either a file handle or
null to browser-fs-access'
fileSave() method, which causes it to overwrite when a handle is given, or to save to a new file if not.
export const saveAsJSON = async (
elements: readonly ExcalidrawElement[],
appState: AppState,
fileHandle: any,
) => {
const serialized = serializeAsJSON(elements, appState);
const blob = new Blob([serialized], {
type: "application/json",
});
const name = `${appState.name}.excalidraw`;
(window as any).handle = await fileSave(
blob,
{
fileName: name,
description: "Excalidraw file",
extensions: ["excalidraw"],
},
fileHandle || null,
);
};
export const loadFromJSON = async () => {
const blob = await fileOpen({
description: "Excalidraw files",
extensions: ["json", "excalidraw"],
mimeTypes: ["application/json"],
});
return loadFromBlob(blob);
};
UI considerations #
Whether in Excalidraw or your app, the UI should adapt to the browser's support situation. If the File System Access API is supported (
if ('showOpenFilePicker' in window) {}) you can show a Save As button in addition to a Save button. The screenshots below show the difference between Excalidraw's responsive main app toolbar on iPhone and on Chrome desktop. Note how on iPhone the Save As button is missing.
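A minimal sketch of such feature detection (the element ID and the hidden-by-default button are hypothetical):
const saveAsButton = document.querySelector('#save-as');
if ('showOpenFilePicker' in window) {
  // Only reveal "Save As" when true saving is possible.
  saveAsButton.hidden = false;
}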
Conclusions #
Working with files works on all modern browsers. On browsers that support the File System Access API, you can make the experience better by allowing for true saving and overwriting (not just downloading) of files and by letting your users create new files wherever they want, all while remaining functional on browsers that do not support the File System Access API. The browser-fs-access library makes your life easier by dealing with the subtleties of progressive enhancement and making your code as simple as possible.
Acknowledgements #
This article was reviewed by Joe Medley and Kayce Basques. Thanks to the contributors to Excalidraw for their work on the project and for reviewing my Pull Requests. Hero image by Ilya Pavlov on Unsplash.
|
https://web.dev/browser-fs-access/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Original version of the paper for the 2018 Jacksonville meeting.
Simplify the paper to a simple set of editorial suggestions.
The Library Fundamentals series of TSes is expected to produce new revisions several times per standard cycle. The current draft should be updated to reflect that C++17 was published shortly after the Albuquerque 2017 meeting.
The library Fundamentals TS is expected to serve as a long-running sequence of TSes that provide experience on new library features that may be added to future standards. With the publication of the latest C++ Standard in 2017, we should rebase the current TS on that standard before adding new experimental components. The first of those components are expected to land soon.
This paper proposes the simplest imagined rebasing of the document. It does not attempt to apply new C++17 language features to existing components. For example, it has not performed a review for class template deduction guides. It is expected such reviews will be much simpler once the text for the landed components is excised.
Similarly, this paper makes no attempt to resolve awkward wording updates where the underlying text of the referenced C++ standard has changed substantially between C++14 and C++17. In such cases, it provides a simple issues list to track the necessary updates, which should be provided by experts in the affected components.
First, we propose excising all components that have merged into the main standard, and update any remaining internal cross-references to point to the C++17 standard for their specification.
Then we update document references in clause 1, then update all numbered cross-references into the new standard to use the updated standard ISO clause numbering.
Finally, we give the project editor guidance on how to apply a few simple patterns to change the remaining text to refer to the updated experimental namespace. Similarly, we leave it as an exercise for the project editor to fix up cross-references from the C++14 standard to the C++17 standard.
A more detailed rebasing was attempted, but produced a much longer document than the Library Working Group would have an easy time reviewing during a meeting. The majority of the extra text was seen as minor changes performing obvious tasks such as fixing up cross-references, and applying consistent editing patterns such as renaming the experimental namespace. It was seen as more appropriate to give editorial direction to the project editor to handle those cases than to have a detailed line-by-line review in LWG session.
Completely excise from the document all the sections marked as
deleted in the index table below.
- 1 General
- 1.1 Scope
- 1.2 Normative references
- 1.3 Namespaces, headers, and modifications to standard classes
-
1.4 Terms and definitions
-
1.4.1 direct-non-list-initialization
- 1.5 Future plans (Informative)
- 1.6 Feature-testing recommendations (Informative)
- 2 Modifications to the C++ Standard Library
- 2.1 Uses-allocator construction
- 3 General utilities library
- 3.1 Utility components
- 3.1.1 Header <experimental/utility> synopsis
- 3.1.2 Class erased_type
-
3.2 Tuples
-
3.2.1 Header <experimental/tuple> synopsis
-
3.2.2 Calling a function with a tuple of arguments
- 3.3 Metaprogramming and type traits
- 3.3.1 Header <experimental/type_traits> synopsis
- 3.3.2 Other type transformations
-
3.3.3 Logical operator traits
- 3.3.4 Detection idiom
-
3.4 Compile-time rational arithmetic
-
3.4.1 Header <experimental/ratio> synopsis
-
3.5 Time utilities
-
3.5.1 Header <experimental/chrono> synopsis
-
3.6 System error support
-
3.6.1 Header <experimental/system_error> synopsis
- 3.7 Class template propagate_const
- 3.7.1 Class template propagate_const general
- 3.7.2 Header <experimental/propagate_const> synopsis
- 3.7.3 propagate_const requirements on T
- 3.7.3.1 propagate_const requirements on class type T
- 3.7.4 propagate_const constructors
- 3.7.5 propagate_const assignment
- 3.7.6 propagate_const const observers
- 3.7.7 propagate_const non-const observers
- 3.7.8 propagate_const modifiers
- 3.7.9 propagate_const relational operators
- 3.7.10 propagate_const specialized algorithms
- 3.7.11 propagate_const underlying pointer access
- 3.7.12 propagate_const hash support
- 3.7.13 propagate_const comparison function objects
- 4 Function objects
- 4.1 Header <experimental/functional> synopsis
- 4.2 Class template function
- 4.2.1 function construct/copy/destroy
- 4.2.2 function modifiers
-
4.3 Searchers
-
4.3.1 Class template default_searcher
-
4.3.1.1 default_searcher creation functions
-
4.3.2 Class template boyer_moore_searcher
-
4.3.2.1 boyer_moore_searcher creation functions
-
4.3.3 Class template boyer_moore_horspool_searcher
-
4.3.3.1 boyer_moore_horspool_searcher creation functions
-
4.4 Function template not_fn
-
5 Optional objects
-
5.1 In general
-
5.2 Header <experimental/optional> synopsis
-
5.3 optional for object types
-
5.3.1 Constructors
-
5.3.2 Destructor
-
5.3.3 Assignment
-
5.3.4 Swap
-
5.3.5 Observers
-
5.4 In-place construction
-
5.5 No-value state indicator
-
5.6 Class bad_optional_access
-
5.7 Relational operators
-
5.8 Comparison with nullopt
-
5.9 Comparison with T
-
5.10 Specialized algorithms
-
5.11 Hash support
-
6 Class any
-
6.1 Header <experimental/any> synopsis
-
6.2 Class bad_any_cast
-
6.3 Class any
-
6.3.1 any construct/destruct
-
6.3.2 any assignments
-
6.3.3 any modifiers
-
6.3.4 any observers
-
6.4 Non-member functions
-
7 string_view
-
7.1 Header <experimental/string_view> synopsis
-
7.2 Class template basic_string_view
-
7.3 basic_string_view constructors and assignment operators
-
7.4 basic_string_view iterator support
-
7.5 basic_string_view capacity
-
7.6 basic_string_view element access
-
7.7 basic_string_view modifiers
-
7.8 basic_string_view string operations
-
7.8.1 Searching basic_string_view
-
7.9 basic_string_view non-member comparison functions
-
7.10 Inserters and extractors
-
7.11 Hash support
- 8 Memory
- 8.1 Header <experimental/memory> synopsis
-
8.2 Shared-ownership pointers
-
8.2.1 Class template shared_ptr
-
8.2.1.1 shared_ptr constructors
-
8.2.1.2 shared_ptr observers
-
8.2.1.3 shared_ptr casts
-
8.2.1.4 shared_ptr hash support
-
8.2.2 Class template weak_ptr
-
8.2.2.1 weak_ptr constructors
- 8.3 Type-erased allocator
- 8.4 Header <experimental/memory_resource> synopsis
-
8.5 Class memory_resource
-
8.5.1 Class memory_resource overview
-
8.5.2 memory_resource public member functions
-
8.5.3 memory_resource protected virtual member functions
-
8.5.4 memory_resource equality
-
8.6 Class template polymorphic_allocator
-
8.6.1 Class template polymorphic_allocator overview
-
8.6.2 polymorphic_allocator constructors
-
8.6.3 polymorphic_allocator member functions
-
8.6.4 polymorphic_allocator equality
- 8.7 template alias resource_adaptor
- 8.7.1 resource_adaptor
- 8.7.2 resource_adaptor_imp constructors
- 8.7.3 resource_adaptor_imp member functions
-
8.8 Access to program-wide memory_resource objects
-
8.9 Pool resource classes
-
8.9.1 Classes synchronized_pool_resource and unsynchronized_pool_resource
-
8.9.2 pool_options data members
-
8.9.3 pool resource constructors and destructors
-
8.9.4 pool resource members
-
8.10 Class monotonic_buffer_resource
-
8.10.1 Class monotonic_buffer_resource overview
-
8.10.2 monotonic_buffer_resource constructor and destructor
-
8.10.3 monotonic_buffer_resource members
-
8.11 Alias templates using polymorphic memory resources
-
8.11.1 Header <experimental/string> synopsis
-
8.11.2 Header <experimental/deque> synopsis
-
8.11.3 Header <experimental/forward_list> synopsis
-
8.11.4 Header <experimental/list> synopsis
-
8.11.5 Header <experimental/vector> synopsis
-
8.11.6 Header <experimental/map> synopsis
-
8.11.7 Header <experimental/set> synopsis
-
8.11.8 Header <experimental/unordered_map> synopsis
-
8.11.9 Header <experimental/unordered_set> synopsis
-
8.11.10 Header <experimental/regex> synopsis
- 8.12 Non-owning pointers
- 8.12.1 Class template observer_ptr overview
- 8.12.2 observer_ptr constructors
- 8.12.3 observer_ptr observers
- 8.12.4 observer_ptr conversions
- 8.12.5 observer_ptr modifiers
- 8.12.6 observer_ptr specialized algorithms
- 8.12.7 observer_ptr hash support
- 9 Containers
- 9.1 Uniform container erasure
- 9.1.1 Header synopsis
- 9.1.2 Function template erase_if
- 9.1.3 Function template erase
- 9.2 Class template array
- 9.2.1 Header <experimental/array> synopsis
- 9.2.2 Array creation functions
- 10 Iterators library
- 10.1 Header <experimental/iterator> synopsis
- 10.2 Class template ostream_joiner
- 10.2.1 ostream_joiner constructor
- 10.2.2 ostream_joiner operations
- 10.2.3 ostream_joiner creation function
- 11 Futures
- 11.1 Header <experimental/future> synopsis
- 11.2 Class template promise
- 11.3 Class template packaged_task
- 12 Algorithms library
- 12.1 Header <experimental/algorithm> synopsis
-
12.2 Search
-
12.3 Sampling
- 12.4 Shuffle
- 13 Numerics library
-
13.1 Generalized numeric operations
-
13.1.1 Header <experimental/numeric> synopsis
-
13.1.2 Greatest common divisor
-
13.1.3 Least common multiple
- 13.2 Random number generation
- 13.2.1 Header <experimental/random> synopsis
- 13.2.2 Utilities
- 13.2.2.1 Function template randint
- 14 Reflection library
- 14.1 Class source_location
- 14.1.1 Header <experimental/source_location> synopsis
- 14.1.2 source_location creation
- 14.1.3 source_location field access
Editor's note: Suggest move clause 8.12 (observer pointer) up the document to be adjacent to its related header synopsis, or move 8.1 down.
Note that in addition to the changes below, there may be a necessary application of an ISO template for clauses 1-3, and a subsequent renumbering.
1 General [general]
1.1 Scope [general.scope]
1.2 Normative references [general.references]
- The following referenced document is indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
- ISO/IEC 14882:2014, Programming Languages — C++
- ISO/IEC 14882:— is herein called the C++ Standard. References to clauses within the C++ Standard are written as "C++14 §3.2". The library described in ISO/IEC 14882:— clauses 17–30 is herein called the C++ Standard Library.
- Unless otherwise specified, the whole of the C++ Standard's Library introduction (C++14 §17) is included into this Technical Specification by reference.
1.3 Namespaces, headers, and modifications to standard classes [general.namespaces]
- Since the extensions described in this technical specification are experimental and not part of the C++ standard library, they should not be declared directly within namespace std. Unless otherwise specified, all components described in this technical specification either:
- modify an existing interface in the C++ Standard Library in-place,
- are declared in a namespace whose name appends ::experimental::fundamentals_v2 to a namespace defined in the C++ Standard Library, such as std or std::chrono, or
- are declared in a subnamespace of a namespace described in the previous bullet, whose name is not the same as an existing subnamespace of namespace std.
[ Example: This TS does not define std::experimental::fundamentals_v2::chrono because the C++ Standard Library defines std::chrono. This TS does not define std::pmr::experimental::fundamentals_v2 because the C++ Standard Library does not define std::pmr. — end example ]
- Each header described in this technical specification shall import the contents of std::experimental::fundamentals_v2 into std::experimental as if by namespace std { namespace experimental { inline namespace fundamentals_v2 {} } }
Note for the future: It would have been much simpler if the following syntax were permitted, but that will require a separate proposal through EWG and Core, so would bind against C++20 at the earliest: namespace std::experimental::fundamentals_v3::pmr { // contents... }
- This technical specification also describes some experimental modifications to existing interfaces in the C++ Standard Library. These modifications are described by quoting the affected parts of the standard and using underlining to represent added text and strike-through to represent deleted text.
- Unless otherwise specified, references to other entities described in this technical specification are assumed to be qualified with std::experimental::fundamentals_v2::, and references to entities described in the standard are assumed to be qualified with std::.
- Extensions that are expected to eventually be added to an existing header are provided inside the <experimental/meow> header, which shall include the standard contents of <meow> as if by #include <meow>
- New headers are also provided in the <experimental/> directory, but without such an #include.
1.4 Terms and definitions [general.defns]
-
For the purposes of this document, the terms and definitions given in the C++ Standard and the following apply.
1.4.1 [general.defns.direct-non-list-init] direct-non-list-initialization
A direct-initialization that is not list-initialization.
1.5 Future plans (Informative) [general.plans]
-
3, std::experimental::fundamentals_v 4,.
1.6 Feature-testing recommendations (Informative) [general.feature.test]
- For the sake of improved portability between partial implementations of various C++ standards, WG21 (the ISO technical committee for the C++ programming language) recommends that implementers and programmers follow the guidelines in this section concerning feature-test macros. [ Note: WG21's SD-6 makes similar recommendations for the C++ Standard itself. — end note ]
There are a couple of repeating patterns in the normative text following the header synopses that should be applied universally. First, replace all opening/closing namespaces matching the following pattern:
namespace std { namespace experimental { inline namespace fundamentals_v2 {
  // some class definition or other specification
  ...
} // namespace fundamentals_v2
} // namespace experimental
} // namespace std
An example of updating a header synopsis:
14.1.1 Header <experimental/source_location> synopsis [reflection.src_loc.synop]
namespace std { namespace experimental { inline namespace fundamentals_v2 {

  struct source_location {
    // 14.1.2, source_location creation
    static constexpr source_location current() noexcept;

    constexpr source_location() noexcept;

    // 14.1.3, source_location field access
    constexpr uint_least32_t line() const noexcept;
    constexpr uint_least32_t column() const noexcept;
    constexpr const char* file_name() const noexcept;
    constexpr const char* function_name() const noexcept;
  };

} // namespace fundamentals_v2
} // namespace experimental
} // namespace std
- [ Note: The intent of source_location is to have a small size and efficient copying. — end note ]
Some parts of the TS update wording in the main standard, so require normative updates to the cross-reference immediates.
2.1 Uses-allocator construction [mods.allocator.uses]
20.7.7 uses_allocator [allocator.uses]
20.7.7.1 uses_allocator trait [allocator.uses.trait]
20.7.7.2 uses-allocator construction [allocator.uses.construction]
Next, the remaining section on type-erased allocators should be using the pmr facility from the main std::pmr namespace that landed in C++17, as there is no further experimental version of this feature.
8.3 Type-erased allocator [memory.type.erased.allocator]
- A type-erased allocator is an allocator or memory resource, alloc, used to allocate internal data structures for an object X of type C, but where C is not dependent on the type of alloc. Once alloc has been supplied to X (typically as a constructor argument), alloc can be retrieved from X only as a pointer rptr of static type std::experimental::pmr::memory_resource* (8.5). The process by which rptr is computed from alloc depends on the type of alloc as described in Table 15:
- Additionally, class C shall meet the following requirements:
- C::allocator_type shall be identical to std::experimental::erased_type.
- X.get_memory_resource() returns rptr.
Then, there are many references to the C++14 standard, denoted thusly: (C++14 §17.6.3.5). They should be replaced as an editorial action with their corresponding reference to the C++17 standard, as (C++17 §20.5.3.5).
Finally, there are a few stylistic cleanups to apply
Thanks to the initial reviewers of R0 of this document, who helped produce this simplified document, and especially to Geoffrey Romer, editor of the Fundamentals TS, who agreed that much of the fine detail was better left as an editorial task he would have to pick up.
|
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0996r1.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Imports?
Table of Contents
When writing software it is generally considered best practice to separate our code into differing physical or logical files, generally with code relating to the same sub problem in the same file or directory structure. This improves code reuse and maintenance. Then, when we wish to make use of code defined in a separate module (likely within a package) we can make use of the variants of import statement which Concurnas supports, described herein.
We can import the following assets from other files: functions (including extension functions), classes, module variables (including those referring to function references), typedefs, enums and annotations.
Imports are essentially referential sugar. They help us to avoid the need to type the fully-qualified path of the asset we are making use of, instead being able to refer to the asset by its short name (the text after the full package name final dot) or by a name we assign at import point (via the
as keyword). At compile time any references to the short name we are making use of are mapped to the fully-qualified path before being passed to the classloader (default or otherwise) for loading. For example, a fully-qualified package name for class
MyClass might be:
com.mycode.MyClass, the short name is
MyClass.
Since Concurnas can run on the JVM, it is compatible with all other code compiled in JVM languages such as Scala, Kotlin, Clojure, etc. and of course Java. All that is necessary to import other JVM compiled code is the compiled class itself (and an appropriate supporting classloader).
There are three supported import statements. All but the star import may use the
as clause to override the short name of the imported asset. In the following examples we import a class asset, but this may just as well be a function, module variable etc:
Import?
import com.mycode.MyClass //MyClass will be imported with a short name of: MyClass
import com.mycode.MyClass as ImportedClass //using 'as' enables us to override the short name of the asset imported
Recall that importing an asset allows us to use it without having to refer to the fully-qualified name of the asset. So the following are equivalent:
import com.mycode.MyClass as ImportedClass //using as enables us to override the short name of the asset imported

inst1 = new ImportedClass() //mapped to: com.mycode.MyClass behind the scenes
inst2 = new com.mycode.MyClass()
From Import?
From import is particularly useful where we wish to import more than one asset from the same package path:
from com.mycode import MyClass
from com.mycode import MyOtherClass, MyOtherClass2

//As with conventional import we can override the short names of the imported assets:
from com.mycode import MyClass as ImportedClass
from com.mycode import MyOtherClass as ImportedClass, MyOtherClass2 as ImportedClass2
Star Import?
If we wish to import all assets under a package name path, then we can use the star notation:
from com.mycode import *
import com.mycode.*
Import star should be used with careful consideration as it can easily cause problems with overuse as short names may conflict with one another.
Import sites?
The variants of the aforementioned import statements may be used at any point in Concurnas code. They follow the normal scoping rules:
def myfunc(){
  from com.mycode import MyClass
  mc = MyClass() //MyClass can now be used within the { } and any nested scopes
  if(acondition){
    mc2 = MyClass() //MyClass may be used here
  }
}
//MyClass may not be used from this point onwards as this is outside the imported scope
Most of the time however, convention dictates that imports are best placed at the top of a module code file for global usage inside said module.
Using imports?
The
import statement is "side effect free" - that is to say that no code will be directly run at the point at which an asset is imported; this only takes place when the asset is used for the first time. This behaviour is in contrast to the likes of Python, where, at the point code is imported, any top-level code present within the imported script is executed.
It is not essential to use import statements in one's code, one could simply refer to the fully-qualified paths of the assets of interest. For example, instead of importing
java.util.ArrayList for use in a new object instantiation, one could just use the fully-qualified name, i.e:
mylist = new java.util.ArrayList<String>()
Packages?
An asset's fully-qualified importable package name is a function of its name within the module it's declared within, the module name and its path relative to the root of compilation at compilation time. Note that Concurnas does not have a package keyword, instead it relies upon the directory structure relative to the root of compilation in order to determine this. So for example, using a conventional directory structure (found in almost all operating systems) when we compile our code if our root was set to
/home/project/src and our code within this root, in a directory structure
./com/mycode.conc (i.e. file
mycode.conc is within subdirectory
com containing the class definition
MyClass) - then the fully qualified package name of the class at compilation time would be
com.mycode.MyClass.
Default imports?
The following packages are imported by default for all Concurnas code. Thus the short names of the Classes within these paths are directly usable within Concurnas code without an explicit import being required:
java.lang.*
com.concurnas.lang.*
com.concurnas.lang.datautils.*
Prohibited imports?
There are some Classes which one may not directly use in ones Concurnas code for various practical reasons:
java.lang.Thread
com.concurnas.runtime.cps.IsoRegistrationSet
com.concurnas.runtime.ConcImmutable
com.concurnas.runtime.ConcurnificationTracker
com.concurnas.bootstrap.runtime.cps.Fiber
com.concurnas.lang.ParamName
java.util.concurrent.ForkJoinWorkerThread
com.concurnas.bootstrap.runtime.InitUncreatable
|
https://concurnas.com/docs/imports.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Back to: C#.NET Tutorials For Beginners and Professionals
Method Overloading in C# with Examples
In this article, I am going to discuss Method Overloading in C# with Examples. Please read our previous article before proceeding to this article where we discussed the basics of Polymorphism in C#. At the end of this article, you will have a very good understanding of the following pointers related to Method Overloading.
- What is Method Overloading in C#?
- When should we overload methods?
- What are the advantages of using Method Overloading in C#?
- When is a method considered as an overloaded method?
- What is the execution control flow of overloaded methods in C#?
- What is Inheritance based overloading?
- Real-time scenarios where you need to use Method Overloading?
Note: The point that you need to keep in mind is function overloading and method overloading terms are interchangeably used. Method overloading is one of the common ways to implement Compile-Time Polymorphism in C#.
What is Method Overloading or Function Overloading in C#?
If we are defining multiple methods with the same name but with a different signature in a class or in the Parent and Child class, then it is called Method Overloading in C#. That means C#.NET allows us to create a method in the derived class with the same name as the method name defined in the base class.
In simple words, we can say that the Method Overloading in C# allows a class to have multiple methods with the same name but with a different signature. The functions or methods can be overloaded based on the number, type (int, float, etc), order, and kind (Value, Ref or Out) of parameters. For better understanding, please have a look at the below image.
The signature of a method consists of the name of the method and the data type, number, order, and kind (Value, Ref or Out) of parameters.
Note: The point that you need to keep in mind is that the signature of a method does not include the return type and the params modifiers. So it is not possible to overload a method just based on the return type and params modifier.
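For instance, a short sketch with a hypothetical Calculator class (not from the article) shows why overloading on return type alone is rejected:
class Calculator
{
    public int Add(int a, int b) { return a + b; }

    // Compile Time Error if uncommented: the return type is not part of the signature,
    // so the compiler sees this as a duplicate of the method above.
    // public double Add(int a, int b) { return a + b; }
}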
We can compare the function overloading with a person overloading. For example, if a person has already some work to do and if we are assigning some additional work to that person then the person’s work will be overloaded. In the same way, a function will have already some work to do and if we are assigning some additional work to that function, then we can say that the function is overloaded.
When should we overload methods in C#?
If you want to execute the same logic but with different types and numbers of arguments, then you need to overload the methods. For example, if you want to add two integers, two floats, and two strings, then you need to define three methods with the same name as shown in the below example.
using System;
namespace PolymorphismDemo
{
    class Program
    {
        public void add(int a, int b)
        {
            Console.WriteLine(a + b);
        }
        public void add(float x, float y)
        {
            Console.WriteLine(x + y);
        }
        public void add(string s1, string s2)
        {
            Console.WriteLine(s1 + s2);
        }
        static void Main(string[] args)
        {
            Program obj = new Program();
            obj.add(10, 20);
            obj.add(10.5f, 20.5f);
            obj.add("pranaya", "kumar");
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}
Output:
What are the advantages of using Method Overloading in C#? Or what are the disadvantages if we define methods with a different name?
If we overload the methods, then the user of our application gets a comfortable feeling when using them, with the impression that he/she is calling one method by passing different types of values. The best example for us is the system-defined “WriteLine()” method. It is an overloaded method, not a single method taking different types of values.
When is a method considered as an overloaded method in C#?
If two methods have the same method name but with different signatures, then those methods are considered overloaded methods; as long as the signatures differ, there is no compile-time or runtime error. Methods can be overloaded in the same class or in super and sub classes because overloaded methods are different methods. But we can’t override a method in the same class; it leads to a Compile Time Error: “method is already defined” because overriding methods are the same methods with a different implementation.
What is the execution control flow of overloaded methods?
The compiler always checks for the called method’s definition in the class of the reference variable type, with the given argument types. So, in searching for and executing a method definition, we must consider both the reference variable type and the argument type: the reference variable type decides from which class the method definition should be bound, and the argument type decides which overloaded method should be bound.
For example:
B b = new B();
b.m1(50) => b.m1(int);
In the above method call, the compiler searches for an m1() method definition with an integer parameter in the “B” class at the time of program compilation, and if it finds that method it binds that method definition. The compiler searches in the B class because the type of the reference variable b is B.
A a = new B();
a.m1(50); => a.m1(int);
In the above method call, at the time of compilation, the compiler will search for an m1() method definition with an integer parameter in the “A” class, not in the B class, even though the object is of type B. This is because, at compilation time, the compiler checks only the reference variable type, not the type of the object it holds. And here, the type of the reference variable a is A while it holds an object whose type is B.
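The following is a hedged sketch (classes A and B here are hypothetical, chosen so that the two m1 overloads do not compete for the same argument) that you can run to see both rules in action:
using System;

class A
{
    public void m1(int i)
    {
        Console.WriteLine("A.m1(int)");
    }
}

class B : A
{
    public void m1(string s)
    {
        Console.WriteLine("B.m1(string)");
    }
}

class Demo
{
    static void Main(string[] args)
    {
        B b = new B();
        b.m1(50);        // reference type B: both A and B members are visible, binds A.m1(int)
        b.m1("hello");   // binds B.m1(string)

        A a = new B();
        a.m1(50);        // reference type A: only A's members are considered, binds A.m1(int)
        // a.m1("hello"); // would not compile: A has no m1(string)
    }
}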
What is Inheritance-Based Overloading in C#?
A method that is defined in the parent class can also be overloaded under its child class. It is called Inheritance-Based Overloading in C#. See the following example for a better understanding. As you can see in the below code, we have defined the add method twice in the ADD1 class and also defined the add method in the child ADD2 class. Here, notice every add method taking different types of parameters.
using System;
namespace PolymorphismDemo
{
    class ADD1
    {
        public void add(int a, int b)
        {
            Console.WriteLine(a + b);
        }
        public void add(float x, float y)
        {
            Console.WriteLine(x + y);
        }
    }
    class ADD2 : ADD1
    {
        public void add(string s1, string s2)
        {
            Console.WriteLine(s1 + s2);
        }
    }
    class Program
    {
        static void Main(string[] args)
        {
            ADD2 obj = new ADD2();
            obj.add(10, 20);
            obj.add(10.5f, 20.5f);
            obj.add("pranaya", "kumar");
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}
Output:
Note: To overload a parent class method under its child class the child class does not require any permission from its parent class.
Real-life Scenario of Method Overloading in C#
Suppose you are working on a maintenance project. And you are going to work on a class where already some parameterized constructors have been defined and you need to pass some additional parameters. So what you will do, either add the required parameter with one of the already defined constructors or add a new constructor as per your requirement. In such cases, you should not add the required parameter with the already defined constructor because this may disturb your other class dependency structure. So what you will do is create a new constructor with the required parameter. That new constructor that you are creating is nothing but the constructor overloading.
Example: Constructor Overloading in C#
Please have a look at the following example. Here, we are creating three different versions of the Constructor, and each constructor taking a different number of parameters and this is called Constructor Overloading in C#.
using System;
namespace ConstructorOverloading
{
    class ConstructorOverloading
    {
        int x, y, z;
        public ConstructorOverloading(int x)
        {
            this.x = 10;
        }
        public ConstructorOverloading(int x, int y)
        {
            this.x = x;
        }
        public ConstructorOverloading(int x, int y, int z)
        {
            this.x = x;
        }
    }
    class Test
    {
        static void Main(string[] args)
        {
            ConstructorOverloading obj1 = new ConstructorOverloading(10);
            ConstructorOverloading obj2 = new ConstructorOverloading(10, 20);
            ConstructorOverloading obj3 = new ConstructorOverloading(10, 20, 30);
            Console.ReadKey();
        }
    }
}
In the next article, I am going to discuss Method Overriding in C# with Examples. Here, in this article, I try to explain What exactly Method Overloading is in C# and when and how to use Method Overloading in C# with examples. I hope you enjoy this Method Overloading in C# with Examples article.
|
https://dotnettutorials.net/lesson/function-overloading-csharp/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
In the Imagenette/woof code, there are 3 pixel size options: px: 128,192,256.
Looking at the dataloaders, there is the code:
def get_dls(size, woof, bs, sh=0., workers=None):
    if size <= 224:
        path = URLs.IMAGEWOOF_320 if woof else URLs.IMAGENETTE_320
    else:
        path = URLs.IMAGEWOOF if woof else URLs.IMAGENETTE
Why is the IMAGENETTE_160 never used? What is it for?
Also, in general, could someone say what the different pixel sizes are really for? For example, why do we want to resize to 128 from something with a 320px minimum dimension?
Help is appreciated!
|
https://forums.fast.ai/t/imagenette-woof-whats-the-160px-size-for/65168
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
How to Unzip file in Python
In this article, we will learn how one can perform unzipping of a file in Python. We will use some built-in functions, some simple approaches, and some custom codes as well to better understand the topic. Let's first have a quick look over what is a zip file and why we use it.
What is a Zip File?
ZIP is an archive file format that permits the original data to be completely reconstructed from the compressed data. A zip file is a single file containing one or more compressed files, offering an easy way to make large files smaller and keep related files together. Python ZipFile is a class of the zipfile module for reading and writing zip files. We need zip files to lessen storage requirements and to improve transfer speed over standard connections.
A zip archive consists of one or more files; in order to use its contents, we need to unzip the archive and extract the documents inside it. Let's learn about different ways to unzip a file in Python, saving the files in the same or a different directory.
Python Zipfile Module
Python
ZipFile module provides several methods to handle file compress operations. It uses context manager construction. Its
extractall() function is used to extract all the files and folders present in the zip file. We can use
zipfile.extractall() function to unzip the file contents in the same directory as well as in a different directory.
Let us look at the syntax first and then the following examples.
Syntax
extractall(path, members, pwd)
Parameters
path - It is the location where the zip file is unzipped, if not provided it will unzip the contents in the current directory.
members - It shows the list of files to be unzipped, if not provided it will unzip all the files.
pwd - If the zip file is encrypted then the password is given, the default is None.
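For instance, if the archive is encrypted, the password can be supplied as bytes (a minimal sketch; the archive name and password here are made up):
from zipfile import ZipFile

with ZipFile('secret.zip', 'r') as f:
    # pwd must be a bytes object
    f.extractall(path='decrypted', pwd=b'my-password')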
Example: Extract all files to the current directory
In the given example, we have a zip file in our current directory. To unzip it first create a ZipFile object by opening the zip file in read mode and then call extractall() on that object. It will extract all the files in the current directory. If the file path argument is provided, then it will overwrite the path.
# import zipfile module
from zipfile import ZipFile

with ZipFile('filename.zip', 'r') as f:
    # extract in current directory
    f.extractall()
Example: Extract all files to a different directory
In the given example, the directory does not exist so we name our new directory as "dir" to place all extracted files from "filename.zip". We pass the destination location as an argument in extractall(). The path can be relative or absolute.
from zipfile import ZipFile

with ZipFile('filename.zip', 'r') as f:
    # extract in different directory
    f.extractall('dir')
Example: Extract selected files to a different directory
This method will unzip and extract only a particular list of files from all the files in the archive. We can unzip just those files which we need by passing a list of names of the files. In the given example, we used a dataset of 50 students (namely- roll1, roll2, ..., roll50) and we need to extract just the data of those students whose roll no is 7, 8, and 10. We make a list containing the names of the necessary files and pass this list as a parameter to extractall() function.
# import zipfile and os module
import zipfile
import os

# list of necessary files
list_of_files = ['roll7.txt', 'roll8.txt', 'roll10.txt']

with zipfile.ZipFile("user.zip", "r") as f:
    f.extractall('students', members=list_of_files)

print("List of extracted files- ")
# loop to print necessary files
p = os.path.join(os.getcwd(), 'students')
for item in os.listdir(path=p):
    print(item)
List of extracted files- roll7.txt roll8.txt roll10.txt
Python Shutil Module
Zipfile provides specific properties to unzip files, but it is a somewhat low-level library module. An alternative to zipfile is the shutil module. It is higher-level compared to zipfile and performs high-level operations on files and collections of files. It uses shutil.unpack_archive() to unpack the file. Let us look at the below example to understand it.
Syntax
shutil.unpack_archive(filename , extract_dir)
Parameters
unpack_archive - It detects the compression format automatically from the "extension" of the filename (.zip, .tar.gz, etc)
filename - It can be any path-like object (e.g. pathlib.Path instances). It represents the full path of the file.
extract_dir (optional) - It can be any path-like object (e.g. pathlib.Path instances) that represents the path of the target directory where the file is unpacked. If not provided the current working directory is used as the target directory.
Example: Extract all files to a different directory
# importing shutil module
import shutil

# Path of the file
filename = "/home/User/Desktop/filename.zip"

# Target directory
extract_dir = "/home/username/Documents"

# Unzip the file
shutil.unpack_archive(filename, extract_dir)
Conclusion
In this article, we learned to unzip files by using several built-in functions such as
extractall(),
shutil.unpack_archive(), along with different examples of storing the extracted contents in the same or a different directory. We also learned about zip files and the Python modules that handle them.
|
https://www.studytonight.com/python-howtos/how-to-unzip-file-in-python
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Back to: C#.NET Tutorials For Beginners and Professionals
Extension Methods in C# with Examples
In this article, I am going to discuss the Extension Methods in C# with examples. Please read our previous article where we discussed Sealed Class and Sealed Methods in C#. At the end of this article, you will understand what exactly C# Extension Methods are and when and how to use these extension methods in C#?
What are Extension Methods in C#?
It is a new feature that was added in C# 3.0 which allows us to add new methods to a class without editing the source code of the class i.e. if a class consists of a set of members and, in the future, you want to add new methods to the class, you can add those methods without making any changes to the source code of the class.
Extension methods can be used as an approach to extending the functionality of a class in the future if the source code of the class is not available or we don’t have any permission in making changes to the class.
Before extension methods, inheritance was the approach used for extending the functionality of a class i.e. if we wanted to add any new members to an existing class without modifying it, we would define a child class of that existing class and then add the new members in the child class.
In the case of an extension method, we extend the functionality of a class by defining the methods we want to add in a new class and then binding them to the existing class.
Both these approaches can be used for extending the functionality of an existing class. The difference is that with inheritance we call the methods defined in the old and new classes using an object of the new class, whereas with extension methods we call both the old and new methods using an object of the old class.
Extension Methods Example in C#:
Let us understand C# Extension Methods with an example. Create a console application and then add a class file with the name OldClass.cs and then copy and paste the following code in it.
using System;

public class OldClass
{
    public int x = 100;
    public void Test1()
    {
        Console.WriteLine("Method one: " + this.x);
    }
    public void Test2()
    {
        Console.WriteLine("Method two: " + this.x);
    }
}
Now our requirement is to add three new methods to the class OldClass. But we don’t want to change the source code of OldClass. Then we can achieve this with the help of extension methods. Let’s create a new class with the name NewClass.cs and then copy and paste the following code in it.
public static class NewClass { public static void Text3(this OldClass O) { Console.WriteLine("Method Three"); } public static void Text4(this OldClass O, int x) { Console.WriteLine("Method Four: " + x); } public static void Text5(this OldClass O) { Console.WriteLine("Method Five:" + O.x); } }
Let us first test the application, then we will understand the extension methods. Now to test whether the methods are accessed using the old class objects or not, add a class Program.CS and write the following code
public class Program { static void Main(string[] args) { OldClass obj = new OldClass(); obj.Test1(); obj.Test2(); //Calling exrension methods obj.Text3(); obj.Text4(10); obj.Text5(); Console.ReadLine(); } }
Now, run the application and see everything is working as expected and it will display the following output.
Points to Remember while working with C# Extension methods:
- Extension methods must be defined only under the static class.
- As an extension method is defined under a static class, compulsory that the method should be defined as static whereas once the method is bound with another class, the method changes into non-static.
- The first parameter of an extension method is known as the binding parameter which should be the name of the class to which the method has to be bound and the binding parameter should be prefixed with this keyword.
- An extension method can have only one binding parameter and that should be defined in the first place of the parameter list.
- If required, an extension method can be defined with a normal parameter also starting from the second place of the parameter list.
Extension Method Real-time Example:
Let us see one real-time scenario where you can use the extension method. As we know string is a built-in class provided by .NET Framework. That means the source code of this class is not available to us and hence we can change the source code of the string class. Now our requirement is to add a method to the String class i.e. GetWordCount() and that method will return the number of words present in a string and we should call this method as shown in the below image.
You can achieve the above using Extension Methods. First, create a class with the name StringExtension and then copy and paste the following code into it. As you can see, here we created the class as static and hence the GetWordCount as static and provide the first parameter as the string class name so that we can call this method on the String class object.
namespace ExtensionMethodsDemo { public static class StringExtension { public static int GetWordCount(this string inputstring) { if (!string.IsNullOrEmpty(inputstring)) { string[] strArray = inputstring.Split(' '); return strArray.Count(); } else { return 0; } } } }
Once you have created the extension method, now you can use that method on the String class object. So, modify the Main method of the Program class as shown below.
namespace ExtensionMethodsDemo { class Program { static void Main(string[] args) { string myWord = "Welcome to Dotnet Tutorials Extension Methods Article"; int wordCount = myWord.GetWordCount(); Console.WriteLine("string : " + myWord); Console.WriteLine("Count : " + wordCount); Console.Read(); } } }
That’s it. Now run the application and you should get the output as expected as shown in the below image.
In the next article, I am going to discuss C# 7 new Features with examples. Here, in this article, I try to explain Extension Methods in C# with examples. I hope this article will help you with your need. I would like to have your feedback. Please post your feedback, question, or comments about this Extension Methods in C# with examples article.
4 thoughts on “Extension Methods in C#”
its amazing……….
i appreciate it
Hi, Although Content is really really good. It would be nice to have a official definition of each topic described.
nice
nice
|
https://dotnettutorials.net/lesson/extension-methods-csharp/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
It's been rumoured that Lincoln D. Durey said: > > Linas, > we are happy users of gnucash 1.4.9. But we had a system crash while a > gnucash was running (most likely not gnucash's fault). As this session > involved about 2-3 hours of hard work, we are very interested in any > available recovery options. > > We have our previos data file (gc_emp), which is exactly in step with > the state of our accounts before the data entry began, and we have the .log > file (gc_emp.20010206224731.log) time stamped just moments before the crash. > there was no .xac file generated at crash time. > > Is there a way to apply the log file (which I can see has the data we > entered) to the original file, and arrive at a nice new gnucash file with all > our updates? Either manually, or with a nice front end. Do you know perl? As of about an hour ago, no one had bothered to do this. So I 'just did it;'. Its dirty, it doesn't check for errors, its minimal. It was harder to create than it should have been; the gnucash engine doesn't have a 'get account by name' function, and it doesn't grok dates quite the way it should. Backup your data, the script may mangle things. You will need to double-check. --linas p.s. I'll check this into cvs under the name 'gnc-restore.pl' or something like that. Maybe it'll be in 1.4.11 as an undocumented feature. #! /usr/bin/perl # # restore gnucash transactions from a gnucash log file. # # Warning! this script probably does the wrong thing, # and has never been tested!! # It will probably destroy your data! Use at your own risk! # # set the path below to where your gnucash.pm is located use lib '/usr/local/gnucash-1.4/lib/gnucash/perl'; use lib '/usr/local/gnucash-1.4/share/gnucash/perl'; use gnucash; # -------------------------------------------------- # @account_list = &account_flatlist ($account_group); # This routine accepts a pointer to a group, returns # a flat list of all of the children in the group. sub account_flatlist { my $grp = $_[0]; my $naccts = gnucash::xaccGroupGetNumAccounts ($grp); my $n; my (@acctlist, @childlist); my $children; foreach $n (0..$naccts-1) { $acct = gnucash::xaccGroupGetAccount ($grp, $n); push (@acctlist, $acct); $children = gnucash::xaccAccountGetChildren ($acct); if ($children) { @childlist = &account_flatlist ($children); push (@acctlist, @childlist); } } return (@acctlist); } # -------------------------------------------------- # If the gnucash engine had a 'get account by name' # utility function, then we wouldn't need this and the above mess. sub get_account_by_name { my $accname = $_[0]; my $name; # loop over the accounts, look for stock and mutual funds. foreach $acct (@acctlist) { $name = gnucash::xaccAccountGetName ($acct); if ($name eq $accname) { $found = $acct; break; } } return ($found); } # -------------------------------------------------- die "Usage: cat <logfile> | $0 <gnucash-filename>" if $#ARGV < 0; # open the file print "Opening file $ARGV[0]\n"; $sess = gnucash::xaccMallocSession (); $grp = gnucash::xaccSessionBeginFile ($sess,$ARGV[0]); die "failed to read file $ARGV[0], maybe its locked? " if (! 
$grp); # get a flat list of accounts in the file @acctlist = &account_flatlist ($grp); $got_data = 0; $nsplit = 0; while (<STDIN>) { # start of transaction if (/^===== START/) { $nsplit = 0; next; } # end of transaction if (/^===== END/) { if ($got_data == 1) { gnucash::xaccTransCommitEdit ($trans); } $got_data = 0; next; } # ignore 'begin' lines if (/^B/) { next; } if (/^D/) { print "WARNING: deletes not handled, you will have to manually delete\n"; next; } # ignore any line that's not a 'commit' if (!/^C/) { next; } chop; # get journal entry ($mod, $id, $time_now, $date_entered, $date_posted, $account, $num, $description, $memo, $action, $reconciled, $amount, $price, $date_reconciled) = split (/ /); # parse amount & price # gnucash-1.4 : float pt, gnucash1.5 : ratio ($anum, $adeno) = split (/\//, $amount); if (0 != $adeno) { $amount = $anum / $adeno; } ($pnum, $pdeno) = split (/\//, $price); if (0 != $pdeno) { $price = $pnum / $pdeno; # value, not price ... if (0 != $amount) { $price = $price/$amount; } } $dyear = int($date_posted/10000000000); $dmonth = int($date_posted/100000000) - 100*$dyear; $dday = int($date_posted/1000000) - 10000*$dyear - 100*$dmonth; $dpost = $dmonth . "/" . $dday . "/" . $dyear; # do a 'commit' if ($mod == C) { print "restoring '$account' '$description' for $pric and '$quant'\n"; print "date is $dpost $date_posted\n"; if ($got_data == 0) { $trans = gnucash::xaccMallocTransaction(); gnucash::xaccTransBeginEdit( $trans, 1); $got_data = 1; } gnucash::xaccTransSetDescription( $trans, $description); gnucash::xaccTransSetDateStr ($trans, $dpost); gnucash::xaccTransSetNum ($trans, $num); if ($nsplit == 0) { $split = gnucash::xaccTransGetSplit ($trans, $nsplit); } else { $split = gnucash::xaccMallocSplit(); gnucash::xaccTransAppendSplit($trans, $split); } gnucash::xaccSplitSetAction ($split, $action); gnucash::xaccSplitSetMemo ($split, $memo); gnucash::xaccSplitSetReconcile ($split, $reconciled); # hack alert -- fixme: the reconcile date is not set ... # need to convert date_reconciled to 'seconds' ... # gnucash::xaccSplitSetDateReconciled ($split, $date_reconciled); gnucash::xaccSplitSetSharePriceAndAmount($split, $price, $amount); $acct = get_account_by_name ($account); gnucash::xaccAccountBeginEdit ($acct, 1); gnucash::xaccAccountInsertSplit ($acct, $split); gnucash::xaccAccountCommitEdit ($acct); $nsplit ++; } } gnucash::xaccSessionSave ($sess); gnucash::xaccSessionEnd ($sess); _______________________________________________ gnucash-devel mailing list [EMAIL PROTECTED]
- gnucash crash!, ? recovery options ? Lincoln D. Durey
- linas
|
https://www.mail-archive.com/gnucash-devel@gnucash.org/msg07878.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
SFML community forums
Help => Network => Topic started by: TestZombie on April 03, 2016, 01:38:36 am
Title:
Sockets cannont be a vector
Post by:
TestZombie
on
April 03, 2016, 01:38:36 am
I dont know what to do
(click to show/hide)
#include <SFML\Network.hpp>
#include <SFML\System.hpp>
void main()
{
std::vector<sf::TcpSocket> socket;
socket.resize(1);
}
this throws this error in vs2015
Severity Code Description Project File Line Suppression State
Error C2280 'sf::TcpSocket::TcpSocket(const sf::TcpSocket &)': attempting to reference a deleted function Test c:\program files (x86)\microsoft visual studio 14.0\vc\include\xmemory0 655
Title:
Re: Sockets cannont be a vector
Post by:
Ixrec
on
April 03, 2016, 10:25:13 am
That error message is telling you that sf::TcpSocket lacks a copy constructor. Many SFML classes deliberately lack copy constructors because semantically it just doesn't make any sense to copy things like keyboards or mice or network connections (see the sf::NonCopyable documentation ( for a list of all the other non-copyable SFML classes).
The issue you've run into, which is a general C++ thing and not unique to SFML, is that std::vector can only store copyable types (or in C++11, moveable types). This is correct, because many of the typical std::vector operations require copying (or moving) the contained objects, so it would be wrong to let you compile this.
Depending on what you want to do with sockets, you can either use a raw array as your container, or you can have your container hold smart pointers to sf::TcpSockets instead of the sf::TcpSockets themselves.
Title:
Re: Sockets cannont be a vector
Post by:
TestZombie
on
April 04, 2016, 05:03:07 am
Thanks!
Title:
Re: Sockets cannont be a vector
Post by:
namosca
on
April 05, 2016, 10:50:32 pm
Hey!
I dont know if you want a vector just because you want to store your objects in an organized way, or just because you want to resize the vector, but an std::list will work fine assuming your most wanted need is to store your objects in an organized way.
It just worked for me:
std::list<sf::TcpSocket> connectionList;
connectionList.emplace_back();// This creates a new socket, directly inside the list
For resizing, this looks dangerous to me to resize without knowing what will happen to the clients connected to your sockets, but this function is also avaiable in list. You can watch all a list can do here on this link:
Have a nice time!
Title:
Re: Sockets cannont be a vector
Post by:
Ixrec
on
April 06, 2016, 09:03:48 pm
For completeness, what namosca said is true of any "node-based container" (i.e. a container implemented with "nodes" that hold one item alongside pointers to other nodes). That includes std::list, std::map, std::set, and so on. These containers do not require the items they contain to be copyable because they
never
copy an item after it's been inserted, no matter what methods you call on them, which is why they work with non-copyable classes like sf::TcpSocket.
The downside of a node-based container is that it can't guarantee contiguous memory like a std::array or std::vector can, and there's the storage and runtime costs of chasing a bunch of pointers. But for many purposes you won't even notice that. It was silly of me not to mention this option in the last post.
SMF 2.0.18
|
SMF © 2021
,
Simple Machines
Simple Audio Video Embedder
|
https://en.sfml-dev.org/forums/index.php?action=printpage;topic=20079.0
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Hello! I didn't read the rules or anything, so I don't know if I'm posting in the right place, but I need help with a program as soon as possible.
I have to answer this question: Find the first ORF in a sequence from a position (findORF(DNA, position)). It must return the first ORF found.
PS: ORF(Open Reading Frame) is a sequence of DNA that is multiple of 3.
PS2: The ORF begins with "ATA" and can either end with "AGA" or "AGG".
The program is:
def find(word, letter, position): while position < len(word): position2 = position + len(letter) if word[position:position2] == letter: return position position = position + 1 return -1 def findORF(DNA): beginning = find(DNA,"ATA", 0) stop1 = find(DNA,"AGA", beginning + 3) stop2 = find(DNA,"AGG", beginning + 3) if len(DNA[beginning:stop1])%3 == 0: return DNA[beginning:stop1 + 3] if len(DNA[beginning:stop1])%3 != 0: if len(DNA[beginning:stop2])%3 == 0: return DNA[beginning:stop2 + 3] else: return "It's not an ORF." print findORF("ATACCCCGCGCGCGCATAAGCGCGAGACGCGCGCGCGCGGAGG") print findORF("ATASDFGHJKLMAAGA") print findORF("AFFATAAAGAAAAGG") print findORF("KKKKKSD")
I'm having a problem with print findORF("ATASDFGHJKLMAAGA") and print findORF("KKKKKSD"). They should return "It's not an ORF.", but it's not working. Can someone help me, please?
|
https://www.daniweb.com/programming/software-development/threads/386276/problem-with-program-using-find
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
SoContextHandler.3coin3 - Man Page
The SoContextHandler class is for now to be treated as an internal class.
Synopsis
#include <Inventor/misc/SoContextHandler.h>
Public Types
typedef void ContextDestructionCB(uint32_t contextid, void *userdata)
Static Public Member Functions
static void destructingContext (uint32_t contextid)
static void addContextDestructionCallback (ContextDestructionCB *func, void *closure)
static void removeContextDestructionCallback (ContextDestructionCB *func, void *closure)
Detailed Description
The SoContextHandler class is for now to be treated as an internal class.
Member Function Documentation
void SoContextHandler::destructingContext (uint32_t contextid) [static]
This method must be called by client code which destructs a context, to guarantee that there are no memory leaks upon context destruction.
This will take care of correctly freeing context-bound resources, like OpenGL texture objects and display lists.
Before calling this function, the context must be made current.
Note that if you are using one of the standard GUI-binding libraries from Kongsberg Oil & Gas Technologies, this is taken care of automatically for contexts for canvases set up by SoQt, SoWin, etc.
void SoContextHandler::addContextDestructionCallback (ContextDestructionCB * func, void * closure) [static]
Add a callback which will be called every time a GL context is destructed. The callback should delete all GL resources tied to that context.
All nodes/classes that allocate GL resources should set up a callback like this. Add the callback in the constructor of the node/class, and remove it in the destructor.
- See also
removeContextDestructionCallback()
void SoContextHandler::removeContextDestructionCallback (ContextDestructionCB * func, void * closure) [static]
Remove a context destruction callback.
- See also
addContextDestructionCallback()
Author
Generated automatically by Doxygen for Coin from the source code.
Referenced By
The man page SoContextHandler.3coin2(3) is an alias of SoContextHandler.3coin3(3).
|
https://www.mankier.com/3/SoContextHandler.3coin3
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
I have the following code segment in a function, and that function is
called often. I have the value of SDLK_LEFT stored as an SDLKey in the
"keys" structure. As a test, whenever the left arrow key is pressed, I
have it print “rotating left!”. This is constantly being called in a
loop, yet if I press and hold the left arrow key down, it only prints
"rotating left!" once. What’s wrong below? (I’d like it to constantly
print “rotating left!”)
SDL_Event event; SDL_keysym keysym; SDL_PollEvent(&event); switch(event.type) { case SDL_QUIT: return (0); break; case SDL_KEYDOWN: keysym = event.key.keysym; if (keysym.sym == SDLK_ESCAPE) { return (0); } if (keysym.sym == keys.rotate_left) { fprintf(stdout, "rotating left!\n"); fprintf(stdout, "keysym.sym = %d\n", keysym.sym); fprintf(stdout, "keys.rotate_left = %d\n", keys.rotate_left); break; } break; default: break; } return (1);
– chris (@Christopher_Thielen)
|
https://discourse.libsdl.org/t/sdl-input-basic-question/7118
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Changes to Qt Positioning#
Migrate Qt Positioning Positioning, and provide guidance to handle them.
Breaking public API changes#
This section contains information about API changes that break source compatibility.
Rename QGeoPolygon::path()#
The
QGeoPolygon::path() and
QGeoPolygon::setPath() methods are renamed to
perimeter() and
setPerimeter() respectively. On the QML side the
perimeter property can be used without any changes.
- Use
QGeoShape
- for
QGeoLocation
bounding area#
The
QGeoLocation class and its Location QML counterpart are updated to use
QGeoShape instead of
QGeoRectangle for a bounding area.
C++#
The
QGeoLocation::boundingBox() and
QGeoLocation::setBoundingBox() are replaced by
boundingShape() and
setBoundingShape() respectively. A
QGeoShape object is now used as an underlying data storage.
QML#
The
QGeoLocation::boundingBox property is replaced by
boundingShape . This property is available since QtPositioning 6.2, so make sure to update the import version in the QML files.
import QtPositioning 6.2
Remove QGeoShape::extendShape()#
The
QGeoShape::extendShape() method was deprecated in Qt 5.9 and finally removed in Qt 6. Use
extendRectangle() and
extendCircle() if you need this functionality for these classes.
Rename signal error to errorOccurred#
In Qt 5 multiple Qt Positioning classes had the
error() signal, which was clashing with the
error() method. In Qt 6 we renamed these signals to
errorOccurred(). Specifically:
-
QGeoAreaMonitorSource::error()is renamed to
errorOccurred().
-
QGeoPositionInfoSource::error()is renamed to
errorOccurred().
-
QGeoSatelliteInfoSource::error()is renamed to
errorOccurred().
Remove update timeout signals#
In Qt 5
QGeoPositionInfoSource::updateTimeout() and
QGeoSatelliteInfoSource::requestTimeout() signals were used to notify about the cases when the current position or satellite information could not be retrieved within specified timeout. These signals were removed in Qt 6. The
errorOccurred() signals with the new error types are used instead. Specifically:
-
QGeoPositionInfoSourceuses an
errorOccurred()signal with a new
UpdateTimeoutErrorerror code.
-
QGeoSatelliteInfoSourceuses an
errorOccurred()signal with a new
UpdateTimeoutErrorerror code.
Same changes apply to PositionSource QML object. The
PositionSource::updateTimeout() signal is removed. PositionSource::sourceError property with a
PositionSource.UpdateTimeoutError is used instead.
Redesign NMEA support#
In Qt 5 we had a serialnmea positioning plugin and a
nmeaSource property in PositionSource object.
The plugin provided access to NMEA streams via serial port, while the QML object was responsible for reading NMEA stream from TCP socket or local file.
In Qt 6 we joined all these features in the plugin, which is now renamed to nmea. It is now capable of working with all three NMEA data sources: serial port, TCP socket and local file. See plugin description for more details.
The
nmeaSource property of PositionSource object is now removed.
Other API changes#
This section contains API improvements that do not break source compatibility. However they might have an impact on the application logic, so it is still useful to know about them.
Reset errors properly#
In Qt 5 the errors for
QGeoAreaMonitorSource ,
QGeoPositionInfoSource and
QGeoSatelliteInfoSource classes were never reset. This behavior is not logical, as calling
startUpdates(),
startMonitoring() or
requestUpdates() on one of these classes or their subclasses effectively means starting a new work sessions, which means that we should not care about previous errors. Since Qt 6 we reset the error to
NoError once one of the aforementioned methods is called.
- Add
streetNumber
The
QGeoAddress class is extended with
streetNumber property, which holds the information about street number, building name, or anything else that might be used to distinguish one address from another. Use
streetNumber() and
setStreetNumber() to access this property from C++ code.
The
street now holds only the street name.
Same applies to Address QML counterpart. The Address::street property is now used only for street name, while the Address::streetNumber property is used for other important address details.
- Add timeout argument to
update()
The
timeout is specified in milliseconds. If the
timeout is zero (the default value), it defaults to a reasonable timeout period as appropriate for the source.
- Refactor
QGeoSatelliteInfo
- ,
QGeoPositionInfo
- and
QGeoAreaMonitorInfo
classes#
These classes now use
QExplicitlySharedDataPointer in their implementation. It means that the classes implement copy-on-write. It makes them cheap to copy, so that they can be passed by value.
Another improvement is the addition of support for the efficient move operations.
Changes in Qt Positioning plugin implementation#
This section provides information about the changes in plugin interface.
In Qt 5 for we had two versions of plugin interface:
-
QGeoPositionInfoSourceFactorywhich provided the basic features.
-
QGeoPositionInfoSourceFactoryV2which extended the base class with the possibility to provide custom parameters for the created objects.
In Qt 6 we merged these two implementations into one, leaving only the
QGeoPositionInfoSourceFactory class. Its methods now allow to pass custom parameters.
Note
The interface identifier is updated to reflect the major version update. Use
"org.qt-project.qt.position.sourcefactory/6.0" in your Qt Positioning plugins.
Here is an example of plugin class declaration:
class MyPlugin : public QObject, public QGeoPositionInfoSourceFactory { Q_OBJECT Q_PLUGIN_METADATA(IID "org.qt-project.qt.position.sourcefactory/6.0" FILE "plugin.json") Q_INTERFACES(QGeoPositionInfoSourceFactory) public: QGeoPositionInfoSource *positionInfoSource(QObject *parent, const QVariantMap ¶meters) override; QGeoSatelliteInfoSource *satelliteInfoSource(QObject *parent, const QVariantMap ¶meters) override; QGeoAreaMonitorSource *areaMonitor(QObject *parent, const QVariantMap ¶meters) override; };
|
https://doc.qt.io/qtforpython/overviews/qtpositioning-changes-qt6.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Custom Password Hashing (Password Encryptors)
Overview
There are times when you have a custom password hash that you want to import into FusionAuth. FusionAuth supports a number of password hashing schemes but you can write a custom plugin if you have hashed your passwords using a different scheme.
You can use your custom password hashing scheme going forward, or you can rehash your passwords. You’d use the former strategy if you wanted to use a strong, unsupported password hashing scheme such as Argon2. You’d use the latter strategy if you are migrating from a system with a weaker hashing algorithm.
This code uses the words 'encryption' and 'encryptor' for backwards compatibility, but what it is really doing is hashing the password.
Write the Password Encryptor Class
The main plugin interface in FusionAuth is the Password Encryptors interface. This allows you to write a custom password hashing scheme. A custom password hashing scheme is useful when you import users from an existing database into FusionAuth so that the users don’t need to reset their passwords to login into your applications.
To write a Password Encryptor, you must first implement the
io.fusionauth.plugin.spi.security.PasswordEncryptor interface. Here’s an example Password Encryptor.
/* * Copyright (c) 2019, FusionAuth,.mycompany.fusionauth.plugins; import javax.crypto.Mac; import javax.crypto.SecretKey; import javax.crypto.SecretKeyFactory; import javax.crypto.spec.PBEKeySpec; import javax.crypto.spec.SecretKeySpec; import java.nio.charset.StandardCharsets; import java.security.InvalidKeyException; import java.security.NoSuchAlgorithmException; import java.security.spec.InvalidKeySpecException; import java.security.spec.KeySpec; import com.mycompany.fusionauth.util.HexTools; import io.fusionauth.plugin.spi.security.PasswordEncryptor; /** * This is an example of a PBKDF2 HMAC SHA1 Salted hashing algorithm. * * <p> * This code is provided to assist in your deployment and management of FusionAuth. Use of this * software is not covered under the FusionAuth license agreement and is provided "as is" without * warranty. * </p> * * @author Daniel DeGroff */ public class ExamplePBDKF2HMACSHA1PasswordEncryptor implements PasswordEncryptor { private final int keyLength; public ExamplePBDKF2HMACSHA1PasswordEncryptor() { // Default key length is 512 bits this.keyLength = 64; } @Override public int defaultFactor() { return 10_000; } @Override public String encrypt(String password, String salt, int factor) { if (factor <= 0) { throw new IllegalArgumentException("Invalid factor value [" + factor + "]"); } SecretKeyFactory keyFactory; try { keyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1"); } catch (NoSuchAlgorithmException e) { throw new IllegalStateException("No such algorithm [PBKDF2WithHmacSHA1]"); } KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt.getBytes(), factor, keyLength * 8); SecretKey secret; try { secret = keyFactory.generateSecret(keySpec); } catch (InvalidKeySpecException e) { throw new IllegalArgumentException("Could not generate secret key for algorithm [PBKDF2WithHmacSHA1]"); } SecretKeySpec secretKeySpec = new SecretKeySpec(secret.getEncoded(), "HmacSHA1"); Mac mac; try { mac = Mac.getInstance("HmacSHA1"); } catch (NoSuchAlgorithmException e) { throw new IllegalStateException("No such algorithm [HmacSHA1]"); } try { mac.init(secretKeySpec); byte[] hashedPassword = mac.doFinal(password.getBytes(StandardCharsets.UTF_8)); return HexTools.encode(hashedPassword); } catch (InvalidKeyException e) { throw new IllegalArgumentException("Invalid key used to initialize HmacSHA1"); } } }
Adding the Guice Bindings
To complete the main plugin code (before we write a unit test), you need to add Guice binding for your new Password Encryptor. Password Encryptors use Guice Multibindings via Map. Here is an example of binding our new Password Encryptor so that FusionAuth can use it for users.
import com.google.inject.AbstractModule; import com.google.inject.multibindings.MapBinder; import com.mycompany.fusionauth.plugins.ExamplePBDKF2HMACSHA1PasswordEncryptor; import io.fusionauth.plugin.spi.PluginModule; import io.fusionauth.plugin.spi.security.PasswordEncryptor; @PluginModule public class MyExampleFusionAuthPluginModule extends AbstractModule { @Override protected void configure() { MapBinder<String, PasswordEncryptor> passwordEncryptorMapBinder = MapBinder.newMapBinder(binder(), String.class, PasswordEncryptor.class); passwordEncryptorMapBinder.addBinding("example-salted-pbkdf2-hmac-sha1-10000").to(ExamplePBDKF2HMACSHA1PasswordEncryptor.class); } }
You can see that we have bound the Password Encryptor under the name
example-salted-pbkdf2-hmac-sha1-10000. This is the same name that you will use when creating users via the User API.
Writing a Unit Test
You’ll probably want to write some tests to ensure that your new Password Encryptor is working properly. Our example uses TestNG, but you can use JUnit or another framework if you prefer. Here’s a simple unit test for our Password Encryptor:
package com.mycompany.fusionauth.plugins; import io.fusionauth.plugin.spi.security.PasswordEncryptor; import org.testng.annotations.DataProvider; import org.testng.annotations.Test; import static org.testng.Assert.assertEquals; /** * @author Daniel DeGroff */ public class ExamplePBDKF2HMACSHA1PasswordEncryptorTest { @Test(dataProvider = "hashes") public void encrypt(String password, String salt, String hash) { PasswordEncryptor encryptor = new ExamplePBDKF2HMACSHA1PasswordEncryptor(); assertEquals(encryptor.encrypt(password, salt, 10_000), hash); } @DataProvider(name = "hashes") public Object[][] hashes() { return new Object[][]{ {"password123", "1484161696d0ca62390273b98846f49671cecd78", "4761D3392092F9CA6036B53DC92C6D7F3D597576"}, {"password123", "ea95629c7954d73ea670f07a798e9fd4ab907593", "9480AD9A59CB5053B832BA5E731AFCD1F78068EC"}, }; } }
To run the tests using the Java Maven build tool, run the following command.
mvn test
Integration Test
After you have completed your plugin, the unit test and installed the plugin into a running FusionAuth installation, you can test it by hitting the User API and creating a test user. Here’s an example JSON request that uses the new Password Encryptor:
{ "user": { "id": "00000000-0000-0000-0000-000000000001", "active": true, "email": "test0@fusionauth.io", "encryptionScheme": "example-salted-pbkdf2-hmac-sha1-10000", "password": "password", "username": "username0", "timezone": "Denver", "data": { "attr1": "value1", "attr2": ["value2", "value3"] }, "preferredLanguages": ["en", "fr"], "registrations": [ { "applicationId": "00000000-0000-0000-0000-000000000042", "data": { "attr3": "value3", "attr4": ["value4", "value5"] }, "id": "00000000-0000-0000-0000-000000000003", "preferredLanguages": ["de"], "roles": ["role 1"], "username": "username0" } ] } }
Notice that we’ve passed in the
encryptionScheme property with a value of
example-salted-pbkdf2-hmac-sha1-10000. This will instruct FusionAuth to use your newly written Password Encryptor.
Sample Code
A sample plugin project is available. If you are looking to write your own custom password hashing algorithm, this project is a good starting point..
Currently rehashing a password when it is changed is not supported. Here’s the tracking issue for this feature.
Feedback
How helpful was this page?
See a problem?
File an issue in our docs repo
|
https://fusionauth.io/docs/v1/tech/plugins/custom-password-hashing
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
How to Print Colored Text in Python
In this article, we will learn to print colored text in Python. We will use some built-in modules and libraries and some custom codes as well. Let's first have a quick look over how Python represents color codes.
In the Python programming language, text can be represented using different colors. There are very simple to use Python libraries for colors and formatting in the terminal. The programmer gets better responses by printing colored texts.
Let's see some useful examples to color text in Python.
Print Color Text using colorma Module
We can use the built-in
colorama module of Python to print colorful text. It is a cross-platform printing module. In this, colored text can be done using
Colorama’s constant shorthand for
ANSI escape sequences. Just import from coloroma module and get your desired output.
import colorama from colorama import Fore print(Fore.RED + 'This text is red in color')
This text is red in color.
import sys from termcolor import colored, cprint text = colored('Hello, World!', 'red', attrs=['reverse', 'blink']) print(text)
Hello, World!
Print Color Text using ANSI Code in Python
We can use ANSI code style to make your text more readable and creative, you can use ANSI escape codes to change the color of the text output in the python program. A good use case for this is to highlight errors. The escape codes are entered right into the print statement.
print("\033[1;32m This text is Bright Green \n")
This text is Bright Green
The above ANSI escape code will set the text color to bright green. The format is;
- \033[ = Escape code, this is always the same
- 1 = Style, 1 for normal.
- 32 = Text colour, 32 for bright green.
Print Color Text using the colored module
We can use the colored module and its functions to color text in Python. It is a library that can be used after installing by using the pip command. So, first, install it and then import it into your python script to highlight text colors.
from colored import fg print ('%s Hello World !!! %s' % (fg(1), attr(0)))
Hello World !!!
Example 2
We can pass the name of the color into the fg() function as well. See, it prints text in blue color as we passed blue as value.
from colored import fg color = fg('blue') print (color + 'Hello World !!!')
Hello World !!!
These are the different ways in which you can print your text in different colors. You can also add different styles to your text, different background colors to your text as well.
Conclusion
In this article, we learned to color text and print colored background as well by using several built-in functions such as
coloroma module,
termcolor,
colored module etc. We used some custom codes as well. For example, we used different colors and text to highlight and print colored text.
|
https://www.studytonight.com/python-howtos/how-to-print-colored-text-in-python
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
10. Controlling 1-wire devices¶
The 1-wire bus is a serial bus that uses just a single wire for communication (in addition to wires for ground and power). The DS18B20 temperature sensor is a very popular 1-wire device, and here we show how to use the onewire module to read from such a device.
For the following code to work you need to have at least one DS18S20 or DS18B20 temperature sensor with its data line connected to GPIO12. You must also power the sensors and connect a 4.7k Ohm resistor between the data pin and the power pin.
import time import machine import onewire, ds18x20 # the device is on GPIO12 dat = machine.Pin(12) # create the onewire object ds = ds18x20.DS18X20(onewire.OneWire(dat)) # scan for devices on the bus roms = ds.scan() print('found devices:', roms) # loop 10 times and print all temperatures for i in range(10): print('temperatures:', end=' ') ds.convert_temp() time.sleep_ms(750) for rom in roms: print(ds.read_temp(rom), end=' ') print()
Note that you must execute the
convert_temp() function to initiate a
temperature reading, then wait at least 750ms before reading the value.
|
https://docs.micropython.org/en/latest/esp8266/tutorial/onewire.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Help with photos module and http post?
Hello Everyone,
I am currently messing around with the Kairos API and am trying to use Pythonista to take a new photo on my iPad and then upload that photo to the Kairos enroll API. I am able to get this to work fine with a URL image but for the life of me I am unable to get this to work by taking a photo with the photos module. From my understanding the photos module returns a PIL Image and I think I need to base64 encode that before uploading to the Kairos API??
Here is my code without using the photos module:
#import photos import requests #img = photos.capture_image() url = "" values = """ { "image": "URL for image", "subject_id": "test", "gallery_name": "test" } """ headers = { 'Content-Type': 'application/json', 'app_id': '********', 'app_key': '************************' } request = requests.post(url, data=values, headers=headers) response = request.content print(response)
Im hoping that someone can help me out by showing me what I need to do to be able to accomplish this task. Any help is greatly appreciated.
Thank you in advance,
Colin
A common approach is to use BytesIO:
import io with io.BytesIO() as output: img.save(output) contents = output.getvalue()
You might need to then pass contents through base64.b64encode
Thanks jonB
Its interesting because I have tried that as well and am not having good results. Maybe I just dont understand how to use io well enough yet...
@inzel, what do the Kairos API docs say about uploaded image formats, sizes etc., any gotchas there?
Here is the only real info I have from their docs:
POST /enroll
Takes a photo, finds the faces within it, and stores the faces into a gallery you create.
To enroll someone into a gallery, all you need to do is submit a JPG or PNG photo. You can submit the photo either as a publicly accessible URL, Base64 encoded photo or as a file upload.
-inzel
@inzel, ok, thanks. Can you share the relevant piece of the actual code where you are trying to upload the image?
The code is basically what I put in my original post. I have tried many many different things but havent been saving my code each time as I havent fully understood the modules. I guess I should just comment it all out in the future instead of deleting it. My main question is how I could take the image obtained from photos.capture_image() and then base64 encode it and send it in POST data.
If you have any thoughts on that I would love to hear about it :) Or if you have any thoughts on another way of accomplising the same task that would be great as well.
Thank you in advance!
-inzel
so, did you try the bytesio code, followed by b64encode? Post that completed attempt, then we can work from there. You might need to pass that through .decode('ascii') after that. you might need json= instead of data=.
I found a complete python api:
which uses those two other modifications I mentioned (they read the file directly, but the important thing is just getting the bytes out into b64encode)
@inzel Perhaps reading here could help.
See pyimgur/init.py upload_image
This module allows to post an image to imgur, using base64...
I am trying to use bytesio and base64 but cant get past the img.save line:
import photos import requests import io import base64 #img = photos.capture_image() with io.BytesIO() as output: img = photos.capture_image() img.save(output) contents = output.getvalue() image = base64.b64encode(contents) url = "``` 10, in <module> img.save(output) 1697, in save format = EXTENSION[ext] KeyError Appears that the save function requires me to have an extension. I then tried another angle and am getting a new error:
img = photos.capture_image()
contents = io.BytesIO(img)
binary_data = contents.getvalue()
image = base64.b64encode(binary_data)``` 8, in <module>
contents = io.BytesIO(img)
TypeError: 'Image' does not have the buffer interface
I tried that as well as:
img.save(output, format = ‘JPG’)
And I get errors each time:
img.save(output,'
img.save(output, format =, format = '
Maybe my syntax is wrong?
Ah perfect. That part works now.
with io.BytesIO() as output: img = photos.capture_image() img.save(output,'JPEG') contents = output.getvalue() image = base64.b64encode(contents)
I feel we are very close. I believe the final step now is determining the proper syntax to POST the payload. This would be much easier if I was able to use wireshark or another packet capture tool to see how the POST looks as its being sent but I cant do that on my iPad. This is what I am trying for my payload line but the syntax is incorrect:
payload = '{'"image": + image + "',"' + '\n' + '"subject_id": "test" + ","' + '\n' + '"gallery_name": "test"'}'
I need it to look like this when its sent:
{
“image”: image,
“subject_id”: “test”,
“gallery_name”:”test”
}
My apologies for all the new guy questions. I really appreciate all the help you guys have provided me so far. Im learning...
I got it to work!
import photos import requests import io import base64 import json with io.BytesIO() as output: img = photos.capture_image() img.save(output,'JPEG') contents = output.getvalue() image = base64.b64encode(contents) url = "" values = { 'image': image, 'subject_id': 'test', 'gallery_name': 'test' } headers = { 'Content-Type': 'application/json', 'app_id': '*********', 'app_key': '*************************' } request = requests.post(url, data=json.dumps(values), headers=headers) response = request.content print(response)``` Thanks everyone!
I decided to clean it up a bit and use some functions. I havent added my comments yet but here is a fully working solution:
import photos import requests import io import base64 import json img = photos.capture_image() def getPhoto(): with io.BytesIO() as output: img.save(output, 'JPEG') contents = output.getvalue() image = base64.b64encode(contents) return image def enrollPhoto(): subject_id = raw_input("Hello, What is your name: ? ") print("Thank you " + subject_id + "." + " Analyzing...") image = getPhoto() url = "" values = { 'image': image, 'subject_id': subject_id, 'gallery_name': subject_id } headers = { 'Content-Type': 'application/json', 'app_id': '***********', 'app_key': '****************************' } r = requests.post(url, data=json.dumps(values), headers = headers) parsed_json = json.loads(r.content) attr = parsed_json['images'][0]['attributes'] img.show() print(json.dumps(attr, indent=2)) enrollPhoto()
Just need to put in your actual app_id and app_key. Should work right away. My next step will be getting a simple interface built and then comparing the pic of the user to existing pics of the same user to determine whether or not they gain access. Something like that anyways. Turned out to be a fun endeavor!
fwiw,
requestslets you use req.json() rather than json.loads(req.contents). also, you can use json=values instead of json.dumps(values) in the request.
|
https://forum.omz-software.com/topic/5088/help-with-photos-module-and-http-post/?page=1
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Details
Bug
- Status: In Progress (View Workflow)
Minor
- Resolution: Unresolved
-
-
- Jenkins 2.65
Some plugins:
Pipeline 2.5
Pipeline: Groovy 2.35
Description
When using groovy script in the Pipeline Plugin, sorting a list using closure or a custom comparator does not work anymore.
Steps to reproduce:
- create new item of type Pipeline
- In the Pipeline script add the following code
#!groovy assert ["aa","bb","cc"] == ["aa","cc","bb"].sort { a, b -> a <=> b }
- Click Save
- Click Build Now
- Check the failed build:
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: Assertion failed:
assert ["aa","bb","cc"] == ["aa","cc","bb"].sort { a, b -> a <=> b }
at org.codehaus.groovy.runtime.InvokerHelper.assertFailed(InvokerHelper.java:404)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.assertFailed(ScriptBytecodeAdapter.java:650)
at com.cloudbees.groovy.cps.impl.AssertBlock$ContinuationImpl.fail(AssertBlock.java:47).cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:173)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:162):162)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:330)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:82)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:242)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:230)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266))
Finished: FAILURE
Expected result is a sorted list:
[aa, bb, cc]
Attachments
Issue Links
- blocks
JENKINS-26481 Mishandling of binary methods accepting Closure
- Resolved
- relates to
JENKINS-50343 ComposedClosure not working with List.each
- Open
Activity
Thanks! adding the annotation helped.
I was also able to change some of the code back to .eachLine {..} as well. I'm pretty sure it worked up until our Jenkins was upgraded today.
If it did, it was by accident. See
JENKINS-26481.
No, it is a defect—just one we do not have a convenient fix for yet. (If you are interested, the issue is that the sort methods require CPS translation of a utility class, and the existing Translator only handles method bodies so far.)
Hi,
I also
have had a similar issue. I am trying to use a shared global library and I have used different methods to sort a List of String's given as an argument, annotated with NonCPS or not there always was a problem:
- the list would not be sorted
- calling the `sort` method of the third party Library with the protoype List sort(SomeInterface[]) always returned a String instead of a List
- the method call from the pipeline script would silently fail and just return null
I tried using Collections.sort directly with a custom Comparator, and also tried to extract the sorting logic into a private NonCPS method to no avail.
Since I had to implement SomeInterface in a wrapper class to make use of the 3rd party Comparator the solution was to annotate any method of this class with {{NonCPS,}} too. Simply using an anonymous class inside a NonCPS annotated method was not good enough...
All in all I needed 90 iterations until I found the problem... for a simple sorting call! Hope this helps someone else too. Just my 2¢.
@NonCPS should work fine. Just be sure that you are not attempting to call CPS-transformed code from inside the closure!
I also had the same issue and spent hours to find out where the problem came from. Thanks a lot jenkey for highlighting that @NonCPS must be present on all methods that might get called during the sort!
In my case the stack trace was definitely not self explanatory :
hudson.remoting.ProxyException: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object '-1' with class 'java.lang.Integer' to class 'java.util.List' at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnSAM(DefaultTypeTransformation.java:405) at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnNumber(DefaultTypeTransformation.java:319) at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnCollection(DefaultTypeTransformation.java:267) at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:219) at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:603) at Unknown.Unknown(Unknown)
fabricepipart I've been having the same casting issues. @NonCPS annotation fixed the issue. Thanks.
Just lost two hours thinking that I'm doing something wrong.
the org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: unclassified field java.lang.Integer Name is ** anything but helpful.
tried all possible sorts with closure, with comparator, with OrderBy, finally falled back to Script Console to learn that each attempt I made works fine.
Decided to submit a bug, but found its already here.
Would love to have proper "@NonCPS" requirement messages/exceptions or even a list to refer to rather than integer error.
Code changed in jenkins
User: Andrew Bayer
Path:
src/main/resources/org/jenkinsci/plugins/scriptsecurity/sandbox/whitelists/generic-whitelist
Log:
JENKINS-44924 Whitelist DefaultGroovyMethods for sort, unique, etc
Code changed in jenkins
User: Andrew Bayer
Path:
src/main/resources/org/jenkinsci/plugins/scriptsecurity/sandbox/whitelists/generic-whitelist
Log:
Merge pull request #191 from abayer/add-more-dgm-methods
JENKINS-44924 Whitelist DefaultGroovyMethods for sort, unique, etc
Compare:
abayer Is there any plans to close the issue? And wonder why is it considered a "Minor"?? The trivial correct groovy code just doesn't (silently!) work and that's a "small problem"?? Amazing...
Any updates on this.?I agree with Roman Sinyakov.This seems to be a serious issue which needs to be fixed.
Yes, I also agree. This has to be fixed. I've just spent two days of my life trying to figure out what was wrong with my pipeline script. My Groovy code worked perfectly in Jenkin's Script Console, but once put in a Jenkinsfile strange things happened. In my case it was not sort() but toSorted() which has the same problem, e.g. with the following code snippet
def foo = ["hello","hi","hey"].toSorted { a, b -> a.length() <=> b.length() } println foo.toString()
taken straight from the toSorted() specification in the Groovy Docs.
In Jenkin's Script Console the output was
[hi, hey, hello]
which is correct. When run from a Jenkinsfile the build's output, however, was
-1
This is not acceptable and certainly not a "Minor" issue, because it costs a lot of time and money to hunt down these kinds of bugs!
Just want to make sure I'm understanding this correctly. Jenkins has broken custom sorting in the Groovy language, expects people to write their Jenkinsfiles and Pipeline libraries in Groovy, and then marks this as a minor issue? Everyone on this thread agrees that's unacceptable,. We're sorry, but this is what it is.
I couldn't get a pipeline script to work with map sort() at all until I came across this. This might be a workaround for the issue above.
Calling sort from a function with @NonCPS annotation eg:
@NonCPS def sortExample(items) { def itemsSorted = items.sort{ it['val'] } println(itemsSorted) } node('targetnode') { def m = [ [name: 'abc', val: '123'], [name: 'zwe', val: '934'], [name: 'wxc', val: '789']] println m sortExample(m) }
resulted in a full map sort dependent on the value of a specific key:
[Pipeline] echo [{name=abc, val=123}, {name=zwe, val=934}, {name=wxc, val=789}] [Pipeline] echo [{name=abc, val=123}, {name=wxc, val=789}, {name=zwe, val=934}]
I found bit of helpful context about this issue :
Edit: captain obvious... I did not notice the link to this exact issue on the wiki
Here is the workaround I used to sort a list of Map objects, based on a String field:
pipeline { stages { stage("Test") { steps { script { def propList = [[name: 'B'], [name: 'A']] sortByField(propList, 'name') propList.eachWithIndex { entry, i -> println "${i}: ${entry}" } } } } } } @NonCPS static sortByField(list, fieldName) { list.sort{ it[field.
If the sort function can't work under any circumstances, shouldn't the correct behavior be to have it throw some exception rather than just pretend to work? (Emphasis for meaning, not for tone.)
I lost about a day's work here trying permutations of the following documented descriptions of array sort:
-
-
-
-
I would have greatly preferred an exception that said "We're sorry, but this is what it is." Instead I assumed that I wasn't supplying the correct arguments, mutating instead of returning a value, etc. That was a very unpleasant rabbit hole.
Can you talk more about what makes function impossible to support? For example, is it related to a threaded library or some other implementation detail that prevents the built-in function from every being translated to CPS? Is it a lack of support for the "spaceship operator"? Or is it something inherent to the sort algorithm that would make it fail even if I tried to implement my own array sort function by hand?
If you are using a reasonably recent version of workflow-cps it should have printed a warning with a link to a wiki page explaining this class of issue and workarounds.
I'm on 2.190.2 of Jenkins, with BlueOcean 1.21.0. Not sure how to find workflow-cps version.
I didn't see a warning, but my point is that it should produce an error, not a warning. What is the justification for allowing me to call a function that you know in advance will not do its job?
/pluginManager/installed; text console log in classic view, not Blue Ocean; and because the detector has false positives (it is very complicated).
Yup, DefaultGroovyMethods.sort overloads are not supported in CPS-transformed mode yet, sorry. You would need to wrap the sorting logic in a method marked @NonCPS.
|
https://issues.jenkins.io/browse/JENKINS-44924?focusedCommentId=325117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Libraries
This application note includes information on using libraries effectively in Particle projects.
About libraries
Libraries are packages of code to add functionality to your application firmware.
Many peripheral devices like sensors, displays, etc. include a library to access the peripheral.
Most libraries are community maintained.
Finding libraries
The web-based library search tool is often the best option:
Search and browse libraries
However, you can also use:
The Web IDE libraries icon.
The Particle CLI library search.
If you are using a Sparkfun Qwiic sensor (I2C), there is a list of sensors and their libraries in the Qwiic reference.
If you have the source to a library, see Using non-libraries and Using GitHub libraries, below, for using it with Particle Workbench.
Libraries and Workbench
Workbench project structure
Workbench projects start out with this structure:
project.properties
src/
discombobulator.cpp
Your application source resides in
src as .ino, .cpp, and .h files. In this example, we have the made-up application file
discombobulator.cpp.
Particle: Install Library
To install a library, you typically use the Command Palette (Ctrl-Shift-P on Windows and Linux and Command-Shift-P on the Mac) and select Particle: Install Library and enter the library name. In this example, the CellularHelper library has been added. This will update two things in your project:
lib/
CellularHelper/
examples/
library.properties
LICENSE
README.md
src/
CellularHelper.cpp
CellularHelper.h
project.properties
src/
discombobulator.cpp
The
project.properties file is updated with the library that you installed. For example:
name=discombobulator dependencies.CellularHelper=0.2.5
This states that the project uses the library
CellularHelper version 0.2.5.
The other thing it does is make a local copy of the library in the
lib directory. This is handy because you can view the README file, as well as browse the library source easily.
Cloud vs. local compiles
Libraries work slightly differently for local vs. cloud compiles, which can cause some confusion.
For cloud compiles, libraries in
project.properties are used even if there is a local copy downloaded in your project. This also applies to using the Particle CLI
particle compile command.
For local compiles, you must have a local copy of the library in the
lib directory. This is done automatically by Particle: Install Library in the command palette, or by using
particle library copy from the CLI.
Customizing libraries
If you are modifying a community library that you previously installed using Particle: Install Library in the command palette, or by using
particle library copy from the CLI, you should remove the dependencies entry in
project.properties in the dependency section. If you do not do this, cloud compiles will pick up the official version instead of your modified version. Even if you normally use local compiles, it's good practice to do this to prevent future confusion.
Most libraries have open source licenses that allow you to use a modified version in your project, however see Libraries and software licenses, below, for more information.
Using private libraries
Most libraries are public, which is to say any user can use and download the library source.
It is possible to make a private library by doing the
particle library upload step and not doing
particle library publish.
However, this library will be private to the account that uploaded it. There is no feature for sharing a library with a team, product, or organization. However, in Workbench there are other techniques that may be helpful in the next sections.
Using non-libraries
When using Workbench the
lib directory can contain things that are not actually Particle libraries, as long as they have the correct directory structure:
lib/
SharedUtilities/
src/
SharedUtilities.cpp
SharedUtilities.h
project.properties
src/
discombobulator.cpp
In this example, there is a "non-library" called
SharedUtilities in the
lib directory. That further contains a
src directory, which contains the actual library source.
The fake library can contain multiple files, but it should only contain .cpp and .h files. It should not have .ino files!
Using GitHub libraries
You can commit your entire project to GitHub (private or public), including the
lib directory. You can reduce code duplication and make updates easier by using Git submodules.
cd lib
git submodule add <repository-url>
In this example, instead of cloning the repository, we use
git submodule add. This makes a copy of it locally, however when you commit your project to GitHub, it only contains a reference to the external project, not the contents of it.
If you've used Tracker Edge firmware, you've probably noticed that when you clone Tracker Edge you need to run the following command:
git submodule update --init --recursive
This is what retrieves all of the submodules in its
lib directory.
This technique is great for working on shared code across teams and projects. You have the full power of GitHub teams and organizations to safely and securely manage access to your code.
Submodules can also be used with a fork of a repository. This allows you to easily modify an existing GitHub-based library in a fork and merge an updated original version with your changes.
See also Working with GitHub for more tips for using it with Workbench.
Upgrading libraries
Particle libraries are not automatically updated when new versions come out. The easiest way to update is to delete the line for the library you want to update from project.properties and then run Particle: Install Library again to update the project.properties file and copy a local version.
If you are using GitHub to manage libraries with submodules, to get the latest main (or master) you use:
git submodule update --remote
Creating public libraries
To create a new library, see contributing libraries.
If you are using Workbench, there are a few special techniques that are required. See Developing libraries in Workbench for more information.
Library naming
You should only use letters A-Z and a-z, numbers, underscore, and dash. You cannot use spaces or accented characters in library names! Case is preserved, but when looking up library names, the search is case insensitive.
Library names are globally unique, even for private libraries. This sometimes causes confusion if you try to upload a new library and it fails with a permission error. Even if a library search does not show anyone using that name, if someone else has uploaded a private library with that name, you will not be able to use it.
Porting Arduino libraries
Set up file structure
Most Arduino libraries already have the correct structure, but if not you will need to move files around to make:
examples/
library.properties
src/
Additionally:
- The src directory should contain .cpp and .h files.
- The examples directory should contain zero or more example projects, with each example in a separate folder in examples.
- Example projects can only be one level deep. If there is a directory in examples with more examples, you'll need to flatten out the directory structure.
- Example source can have a single .ino file in each example project directory, or it can use .cpp files.
Edit library.properties
Most Arduino libraries should already have a library.properties file, but if not, you will need to create one.
Note that the name (library name) in
library.properties must match the directory name of the library. This is not a requirement for Arduino libraries, and some libraries may have a descriptive name (with spaces) in this field, so you must edit this field to match.
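A minimal library.properties looks roughly like this (all values are illustrative; only the requirement that name matches the directory name comes from the text above):

name=MyPortedLib
version=0.0.1
author=Jane Doe <jane@example.com>
license=MIT
sentence=Particle port of the MyPortedLib Arduino library.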
Fix compile errors
Some libraries are easier to port than others. Many will require no modifications at all. Some common problems:
- Unnecessary includes. Things like #include "Wire.h" are not used on the Particle platform and can be removed.
- Naming conflicts. Occasionally libraries will use variables that conflict with names that are not used in Arduino, but may be used on the Particle platform.
- If the library has large amounts of test code or code for other platforms, you may need to remove it. Otherwise it may be included in the uploaded library, and very large libraries will not load in the Web IDE.
Making modifications for inclusion in the original source
Sometimes you'll make changes to the original library and publish it. Other times, you may want your changes incorporated in the original library, typically by using a GitHub pull request. The most common way is to isolate any Particle-specific code in a
#ifdef or
#ifndef.
#ifdef PARTICLE
// Particle-specific code
#endif
Libraries and software licenses
There is no standard for software licenses for library code, and it is up to the library creator to assign one. Most libraries have a LICENSE file, or include the license information in the README or in the source code files.
With proprietary projects
If your application is proprietary, you must make sure that any libraries you use have a permissive license. A permissive license allows the library to be used in a closed-source application, even though the library itself is open source. Common permissive licenses include:
- BSD (2 or 3-clause)
- MIT
- Apache
- Public Domain
- CC0 (Creative Commons Zero)
In particular, GPL and LGPL libraries cannot be used in proprietary user applications! This is even true for LGPL because of the dynamic linking rule. Since Particle libraries are statically linked to the user application, the allowance for LGPL libraries to be used in dynamically linked proprietary applications does not apply.
With open source projects
You can generally use any of the popular licenses in open source projects.
Note, however, that if you use a library that has a copyleft license, such as GPL or CC-BY-SA, then your application must generally have a similar copyleft license.
However, if you use a library with a permissive license such as MIT, you are free to release your application with permissive licenses (such as MIT, Apache, or BSD), or a copyleft license (such as GPL).
Though rare, a library with a JRL (Java Research License), AFPL (Aladdin Free Public License), or CC-BY-NC license cannot be used in a commercial product, even if open source.
Additionally, there may be a requirement for attribution with CC-BY and some other licenses.
Less common scenarios
Libraries with a static library
It is not currently possible to create a Particle library that includes a static library of proprietary code. For example, the Bosch Sensortec BSEC library for the BME680 is not open source, but rather a closed-source library .a file that can be linked with an application. There is currently no way to include this in a cloud compile.
|
https://docs.particle.io/firmware/best-practices/libraries/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Thanks to @NathanaelA pledge on Patreon we just received a wonderful addition:
:sparkles: Monaco support (the editor that powers VS Code) :sparkles:
Live Demo:
Demo Code:
Binding repository:
@calibr Yes, I broke encoding. These are my final fixes to create a stable API to Yjs
Thanks for your feedback @canadaduane :)
You can absolutely do that in both v13 and v12. Shared types are just data types and of course you can nest them. But it may take some time to get used to the concept of shared data types. You should just start and observe the changes as you are doing them. Here is a template to create a file system using Y.Map as the directory and Y.Text as files. This will work in Yjs version 13 (in beta):
const yMap = doc.getMap('my-directory')
const file = new Y.Text()
yMap.set('index.js', file)
new TextareaBinding(file, document.querySelector('textarea'))
yMap.set('index.js', file)
import React, {Component} from 'react';
import * as Y from 'yjs'
import { WebsocketProvider } from 'y-websocket'

const doc = Y.Doc()
const provider = new WebsocketProvider('', 'roomname')
provider.sync('doc')
const ytext = doc.getText('my resume')

class App extends Component {
  render() {
    return (<div>test</div>);
  }
}

export default App;
Thanks @crazypenguinguy I made some mistakes in the documentation. For example, the URL must be a ws:// or wss:// URL. I corrected it. Let me know if something else is unclear.
Here is the fixed code for your demo:
import React, { Component } from "react";
import ReactDOM from "react-dom";
import * as Y from "yjs";
import { WebsocketProvider } from "y-websocket";

const doc = new Y.Doc();
const provider = new WebsocketProvider("ws://localhost:1234", "roomname", doc);
const ytext = doc.getText("my resume");

class App extends Component {
  render() {
    return <div>test</div>;
  }
}

export default App;

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
Good questions.
Comparing Yjs and Automerge:
• Both projects provide easy access to manipulating the shared data. Automerge's design philosophy is that state is immutable. In Yjs, state is mutable and observable.
• Automerge has more demo apps where state is shared to build some kind of application. But in theory, you could implement shared text editing with Automerge.
• Yjs is more focused on shared editing (on text, rich text, and structured content). It has lots of demos for different editors (quill, ProseMirror, Ace, CodeMirror, Monaco, ..).
• I spent a lot of time to optimize Yjs to work on large documents. While Automerge works fine on small documents, it has serious performance problems as the document grows in size.
• Yes, Automerge has support for hypermerge/DAT. I am also looking into it, as it seems like a really cool idea. I'm currently exploring multifeed for that. On the other hand, Yjs has support for IPFS.
Yjs & Electron:
No problem here in general. You'll need to polyfill WebSocket / WebRTC support in electron. There is
ws for WebSocket support and
node-webrtc for WebRTC.
Query Index:
There is nothing like that to my knowledge. But maybe you could have a look at GUN
@calibr I have no experience with document indexing. Thanks for sharing your experience.
@canadaduane Thanks for your appreciation :) I was referring to the frontend of the shared editing framework. Yjs exposes mutable types (e.g. Y.Array). Automerge exposes immutable json-like objects.
In Yjs, the operation log is not immutable. I.e. it may decrease in size when you delete content. I describe some optimizations I do in the v13 log, but let me know if you want to know more.
About P2P electron apps: DAT is a very ambitious project that wants to share many large files in a distributed network. Compared to WebRTC, UDP connections are initialized much faster and are better suited for their use-case (e.g. walking through peers of the DHT). However, if you only want to share a single document, WebRTC will work just fine and is also supported in the browser.
@/all
The Quill Editor binding was just added for v13 including shared cursors and many additional consistency tests.
Demo:
y-quill:
userOnly: true(see).
|
https://gitter.im/y-js/yjs?at=5cf5313582c2dc79a543c6ce
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
> To be honest, I see "async with" being abused everywhere in asyncio, lately. I like to have objects with start() and stop() methods, but everywhere I see async context managers.
>
> Fine, add nursery or whatever, but please also have a simple start() / stop() public API.
>
> "async with" is only good for functional programming. If you want to go more of an object-oriented style, you tend to have start() and stop() methods in your classes, which will call start() & stop() (or close()) methods recursively on nested resources. Some of the libraries (aiopg, I'm looking at you) don't support start/stop or open/close well.

Wouldn't calling __enter__ and __exit__ manually work for you?

I started coding begin() and stop(), but I removed them, as I couldn't find a use case for them. And what exactly is the use case that doesn't work with `async with`? The whole point is to spot the boundaries of the task execution easily. If you start()/stop() randomly, it kind of defeats the purpose. It's a genuine question, though. I can totally accept that I overlooked a valid use case.

> I tend to slightly agree, but OTOH if asyncio had been designed to not schedule tasks automatically on __init__ I bet there would have been other users complaining "why didn't task XX run?", or "why do tasks need a start() method, that is clunky!". You can't please everyone...

Well, ensure_future([schedule_immediately=True]) and asyncio.create_task([schedule_immediately=True]) would take care of that. They are the entry points for task creation and scheduling.

> Also, in
>
> task_list = run.all(foo(), foo(), foo())
>
> As soon as you call foo(), you are instantiating a coroutine, which consumes memory, while the task may not even be scheduled for a long time (if you have 5000 potential tasks but only execute 10 at a time, for example).

Yes, but this has the benefit of accepting any awaitable, not just coroutines. You don't have to wonder what to pass, or in which form. It's always the same. Too many APIs are hard to understand because you never know if they accept a callback, a coroutine function, a coroutine, a task, a future... For the same reason, requests.get() creates and destroys a session every time. It's inefficient, but way easier to understand, and it fits the majority of use cases.

> But if you do as Yuri suggested, you'll instead accept a function reference, foo, which is a singleton; you can have many foo references to the function, but they will only create coroutine objects when the task is actually about to be scheduled, so it's more efficient in terms of memory.

I made some tests, and the memory consumption is indeed radically smaller if you just store references, compared to storing the same unique raw coroutine. However, this is a rare case. It assumes that:

- you have a lot of tasks
- you have a max concurrency
- the max concurrency is very small
- most tasks reuse a similar combination of callables and parameters

It's a very specific, narrow case. Also, everything you store on the scope will be wrapped into a Future object whether or not it's scheduled, so that you can cancel it later, so the difference in memory consumption is not as large as it might seem. I didn't want to compromise the quality of the current API for the general case for an edge-case optimization. On the other hand, this is a low-hanging fruit, and on platforms such as the Raspberry Pi, where asyncio has a lot to offer, it can make a big difference to shave off 20% of the memory consumption of a specific workload.
So I listened and implemented an escape hatch:

import random
import asyncio
import ayo

async def zzz(seconds):
    await asyncio.sleep(seconds)
    print(f'Slept for {seconds} seconds')

@ayo.run_as_main()
async def main(run_in_top):
    async with ayo.scope(max_concurrency=10) as run:
        for _ in range(10000):
            run.from_callable(zzz, 0.005)  # or run.asap(zzz(0.005))

This would only lazily create the awaitable (here the coroutine) on scheduling. I see a 15% memory saving for the WHOLE program when using `from_callable()`. So definitely a good feature to have, thank you.

But again, and I hope Yuri is reading this because he will implement that for uvloop, and this will trickle down to asyncio, I think we should not compromise the main API for this. asyncio is hard enough to grok, and too many concepts fly around. The average Python programmer has experienced way easier things in past Python encounters. If we want asyncio, one day, to be considered the clean AND easy way to do async, we need to work on the API. asyncio.run() is a step in the right direction (although, again, I wish we had implemented that 2 years ago when I talked about it, instead of being told no). Now if we add nurseries, they should hide the rest of the complexity. Not add to it.
|
https://mail.python.org/pipermail/python-dev/2018-July/154597.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Banner
Banner ads are classic static banners, usually located at the bottom or top of the screen. Appodeal supports traditional 320x50 banners, 728x90 tablet banners and smart banners that adjust to the size and orientation of the device.
1. Manual caching
By default, auto caching is enabled: Appodeal SDK starts to load Banner right after the initialization method is called. The next banner ad starts to load after the previous one has been shown.
To disable automatic caching for banners, use the code below before SDK initialization:
Appodeal.setAutocache(false, types: .banner)
Then, to cache a banner manually, call:
Appodeal.cacheAd(.banner)
2. Checking if banner has been loaded
You can check if the ad has been loaded before showing it. This method returns a boolean value indicating whether or not the banner has been loaded.
Appodeal.isReadyForShow(with: .bannerTop)
3. Displaying banner at the bottom of the screen
Banner is now a singleton: if you are using
bannerTop or
bannerBottom on different controllers, the SDK will use the same banner instance.
Banner ads are refreshed every 15 seconds automatically by default. To display a banner, you need to call the following code:
Appodeal.showAd(.bannerBottom, rootViewController: self)
4. Displaying banner at the top of the screen
Appodeal.showAd(.bannerTop, rootViewController: self)
5. Displaying banner at the left or right corner of the screen
If your app uses landscape interface orientation you can show Appodeal Banner at the left or right corner. The banner will have the offset according to the safe area layout guide.
Disable banner smart sizing if you use AppodealShowStyleBannerLeft or AppodealShowStyleBannerRight
// Overrides default rotation angles
// Appodeal.setBannerLeftRotationAngleDegrees(90, rightRotationAngleDegrees: 180)
Appodeal.showAd(.bannerLeft, forPlacement: placement, rootViewController: self)
// Appodeal.showAd(.bannerRight, forPlacement: placement, rootViewController: self)
6. Displaying banner in programmatically created view
You can also add the Appodeal banner to your view hierarchy manually.
For example:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    if let banner = Appodeal.banner() {
        self.view.addSubview(banner)
        banner.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width, height: 50)
    }
}
Important!
BannerView must be at the top of the hierarchy and mustn't be overlapped by other views.
7. Using banner callbacks
Callbacks are used to track different lifecycle events of an ad, e.g., when a banner has been loaded or shown.
If automatic caching is ON for the Banner ad type, do not show banner in the
bannerDidLoadAdIsPrecache callback. The banner will be refreshed automatically after the first show.
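As a rough sketch of how such callbacks can be adopted (the protocol name AppodealBannerDelegate, the setBannerDelegate(_:) registration call, and the failure callback are assumptions to verify against the SDK headers; only bannerDidLoadAdIsPrecache is named above):

import UIKit
import Appodeal

class BannerCallbacksViewController: UIViewController, AppodealBannerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Register this controller to receive banner lifecycle callbacks (assumed API).
        Appodeal.setBannerDelegate(self)
    }

    // Called when a banner loads; `precache` indicates a precached (lower-value) ad.
    func bannerDidLoadAdIsPrecache(_ precache: Bool) {
        // With autocache ON, do not call showAd from here (see the note above).
    }

    // Assumed failure callback; handle it by retrying or logging.
    func bannerDidFailToLoadAd() {
    }
}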
8. Hiding banner
To remove banner from your view hierarchy:
Appodeal.hideBanner()
9. Banner placements
Appodeal SDK allows you to tag each impression with different placement. To be able to use placements, you need to create them in Appodeal Dashboard. Read more about placements.
Appodeal.showAd(.bannerTop, forPlacement: placement, rootViewController: self)
Appodeal.canShow(.bannerTop, forPlacement: placement)
If you have no placements or call Appodeal.show with a placement that does not exist, the impression will be tagged with 'default' placement with corresponding settings applied.
Important!
Placement settings affect ONLY ad presentation, not loading or caching.
10. Advanced banner integration
Advanced BannerView integration
If basic integration is not appropriate for you due to the complex views hierarchy of your app, you can use
AppodealBannerView UIView subclass to integrate banners.
import UIKit
import Appodeal

class YourViewController: UIViewController, AppodealBannerViewDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // required: init ad banner
        let bannerView = AppodealBannerView(size: bannerSize, rootViewController: self)
        // optional: set delegate
        bannerView.setDelegate(self)
        // required: add banner to superview and call -loadAd to start banner loading
        self.view.addSubview(bannerView)
        bannerView.loadAd()
    }
}
11. Enabling smart banners
Smart banners are banner ads which automatically fit the screen/container size. Using them helps to deal with increasing fragmentation of the screen sizes on different devices. To enable them, use the following method:
// for top/bottom banners: allows the banner view to resize automatically to fit the device screen
Appodeal.setSmartBannersEnabled(true)
// for banner view: allows the banner view to resize automatically to fit the device screen
bannerView.usesSmartSizing = true
12. Changing banner background
This method allows to create a grey background for banner ads:
// for top/bottom banners
Appodeal.setBannerBackgroundVisible(true)
// for bannerView
bannerView.backgroundVisible = true
13. Enabling banner refresh animation
// for top/bottom banners
Appodeal.setBannerAnimationEnabled(true)
// for bannerView
bannerView.bannerAnimationEnabled = true
14. Getting predicted eCPM
This method returns the expected eCPM for the cached ad. The amount is calculated based on historical data for the current ad unit.
Appodeal.predictedEcpm(for: .banner)
15. Checking if banner has been initialized
Appodeal.isInitialized(for: .banner)
Returns
true if banner was initialized.
16. Checking if autocache is enabled for banner
Appodeal.isAutocacheEnabled(.banner)
Returns
true if autocache is enabled for banner.
|
https://wiki.appodeal.com/en/ios-beta-3-0-0/get-started/ad-types/banner
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Aspose.Words for C++ 22.3 Release Notes
Major Features
We have added the following features from Aspose.Words for .NET in this regular monthly release:
- Added saving to PDF 2.0 and several other improvements in PDF output.
- Improved DML chart axis scaling algorithm.
- Saving progress notifications were extended for TXT format.
- Improved table comparing algorithm.
Limitations and API Differences
Aspose.Words for C++ has some differences as compared to its equivalent .NET version of the API. This section contains information about all such functionality that is not available in the current release. The missing features will be added in future releases.
- The current release does not support Metered license.
- The current release does not support LINQ and Reporting features.
- The current release does not support OpenGL 3D Shapes rendering.
- The current release does not support loading PDF documents.
- The current release has limited support for database features - C++ doesn’t have common API for DB like .NET System.Data.
- The current release supports Microsoft Visual C++ version 2017 or higher.
- The current release supports GCC 6.3 or higher and Clang 3.9.1 or higher on Linux and only for the x86_x64 platform.
- The current release supports macOS Big Sur or later (11.5+) for 64-bit Intel Mac platform.
Full List of Issues Covering all Changes in this Release (Reported by C++ Users)
Full List of Issues Covering all Changes in this Release (Reported by .NET Users)
Full List of Issues Covering all Changes in this Release (Reported by Java Users)
Public API and Backward Incompatible Changes
This section lists public API changes that were introduced in Aspose.Words 22.3.
Added saving to PDF 2.0 and several other improvements in PDF output
Related issue: WORDSNET-23250
- New value added to PdfCompliance enum
public enum PdfCompliance
{
    /// <summary>
    /// The output file will comply with the PDF 2.0 (ISO 32000-2) standard.
    /// </summary>
    Pdf20
}
- Improvements in PDF digital signatures:
  - Changed the PDF digital signature type from "adbe.pkcs7.sha1" to "adbe.pkcs7.detached" to fit all supported PDF versions.
  - Added the PdfDigitalSignatureHashAlgorithm.RipeMD160 value.
  - The PdfDigitalSignatureHashAlgorithm.Sha1 and PdfDigitalSignatureHashAlgorithm.Md5 values are marked as obsolete.
  - The default value for PdfDigitalSignatureDetails.HashAlgorithm changed from PdfDigitalSignatureHashAlgorithm.Sha512 to PdfDigitalSignatureHashAlgorithm.Sha256. SHA-256 is the most popular hashing algorithm; it is strong enough and it is used by default by Adobe Acrobat when signing documents.
public enum PdfDigitalSignatureHashAlgorithm
{
    /// <summary>
    /// SHA-1 hash algorithm.
    /// </summary>
    [Obsolete("SHA-1 hash algorithm has been deprecated in latest PDF specification. Please, use the other hash algorithm instead.")]
    Sha1,

    /// <summary>
    /// MD5 hash algorithm.
    /// </summary>
    [Obsolete("MD5 hash algorithm has been deprecated in latest PDF specification. Please, use the other hash algorithm instead.")]
    Md5,

    /// <summary>
    /// RIPEMD-160 hash algorithm.
    /// </summary>
    RipeMD160,
}

public class PdfDigitalSignatureDetails
{
    /// <summary>
    /// Gets or sets the hash algorithm.
    /// </summary>
    /// <remarks>The default value is the SHA-256 algorithm.</remarks>
    public PdfDigitalSignatureHashAlgorithm HashAlgorithm { get; set; }
}
- Improvements in PDF encryption:
  - Removed the PdfEncryptionAlgorithm enum and the encryptionAlgorithm parameter from the PdfEncryptionDetails constructor. This is a breaking change.
  - PDF 1.7 output is now encrypted with the AES-128 encryption algorithm and PDF 2.0 output with the AES-256 algorithm.
  - Updated the XML comments on PdfPermissions to fit the current algorithms.
public class PdfSaveOptions
{
    /// <summary>
    /// Gets or sets the details for encrypting the output PDF document.
    /// </summary>
    /// <remarks>
    /// <para>The default value is null and the output document will not be encrypted.
    /// When this property is set to a valid <see cref="PdfEncryptionDetails"/> object,
    /// then the output PDF document will be encrypted.</para>
    /// <para>AES-128 encryption algorithm is used when saving to PDF 1.7 based compliance (including PDF/UA-1).
    /// AES-256 encryption algorithm is used when saving to PDF 2.0 based compliance.</para>
    /// <para>Encryption is prohibited by PDF/A compliance. This option will be ignored when saving to PDF/A.</para>
    /// <para><see cref="PdfPermissions.ContentCopyForAccessibility"/> permission is required by PDF/UA compliance
    /// if the output document is encrypted. This permission will automatically be used when saving to PDF/UA.</para>
    /// <para><see cref="PdfPermissions.ContentCopyForAccessibility"/> permission is deprecated in PDF 2.0 format.
    /// This permission will be ignored when saving to PDF 2.0.</para>
    /// </remarks>
    public PdfEncryptionDetails EncryptionDetails { get; set; }
}

public class PdfEncryptionDetails
{
    /// <summary>
    /// Initializes an instance of this class.
    /// </summary>
    public PdfEncryptionDetails(string userPassword, string ownerPassword);
}

public enum PdfPermissions
{
    /// <summary>
    /// Disallows all operations on the PDF document.
    /// This is the default value.
    /// </summary>
    DisallowAll,

    /// <summary>
    /// Allows all operations on the PDF document.
    /// </summary>
    AllowAll,

    /// <summary>
    /// Copy or otherwise extract text and graphics from the document by operations other than that controlled
    /// by <see cref="ContentCopyForAccessibility"/>.
    /// </summary>
    ContentCopy,

    /// <summary>
    /// Extract text and graphics (in support of accessibility to users with disabilities or for other purposes).
    /// </summary>
    ContentCopyForAccessibility,

    /// <summary>
    /// Modify the contents of the document by operations other than those controlled by
    /// <see cref="ModifyAnnotations"/>, <see cref="FillIn"/>, and <see cref="DocumentAssembly"/>.
    /// </summary>
    ModifyContents,

    /// <summary>
    /// Add or modify text annotations, fill in interactive form fields, and, if <see cref="ModifyContents"/> is
    /// also set, create or modify interactive form fields (including signature fields).
    /// </summary>
    ModifyAnnotations,

    /// <summary>
    /// Fill in existing interactive form fields (including signature fields), even if <see cref="ModifyContents"/>
    /// is clear.
    /// </summary>
    FillIn,

    /// <summary>
    /// Assemble the document (insert, rotate, or delete pages and create document outline items or thumbnail
    /// images), even if <see cref="ModifyContents"/> is clear.
    /// </summary>
    DocumentAssembly,

    /// <summary>
    /// Print the document (possibly not at the highest quality level, depending on whether
    /// <see cref="HighResolutionPrinting"/> is also set).
    /// </summary>
    Printing,

    /// <summary>
    /// Print the document to a representation from which a faithful digital copy of the PDF content could be
    /// generated, based on an implementation-dependent algorithm. When this flag is clear (and
    /// <see cref="Printing"/> is set), printing shall be limited to a low-level representation of the appearance,
    /// possibly of degraded quality.
    /// </summary>
    HighResolutionPrinting
}
- Several options in PdfSaveOptions cannot be used when saving PDF 2.0
public class PdfSaveOptions
{
    /// <summary>
    /// Gets or sets a value determining whether or not to substitute TrueType fonts Arial, Times New Roman,
    /// Courier New and Symbol with core PDF Type 1 fonts.
    /// </summary>
    /// <remarks>
    /// ...
    /// <para>Core fonts are not supported when saving to PDF 2.0 format. <c>false</c> value will be used
    /// automatically when saving to PDF 2.0.</para>
    /// ...
    /// </remarks>
    public bool UseCoreFonts { get; set; }

    /// <summary>
    /// Gets or sets a value determining the way <see cref="Document.CustomDocumentProperties"/> are exported to PDF file.
    /// </summary>
    /// <remarks>
    /// ...
    /// <para><see cref="PdfCustomPropertiesExport.Standard"/> value is not supported when saving to PDF 2.0.
    /// <see cref="PdfCustomPropertiesExport.Metadata"/> will be used instead.
    /// </para>
    /// </remarks>
    public PdfCustomPropertiesExport CustomPropertiesExport { get; set; }
}
- Changes related to obsolete PdfCompliance enum values:
- Removed obsolete PdfCompliance.Pdf15
- Removed Obsolete attribute from PdfCompliance.PdfA1b and PdfCompliance.PdfA1a
|
https://docs.aspose.com/words/cpp/aspose-words-for-cpp-22-3-release-notes/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
.NET Reunified : Announcing .NET 5.0 🚀
And how to migrate.
On November 10th 2020, Microsoft announced .NET 5.0, marking an important step forward for developers working across desktop, Web, mobile, cloud and device platforms. In fact, .NET 5 is that rare platform update that unifies divergent frameworks, reduces code complexity and significantly advances cross-platform reach. .NET 5.0 is already battle-tested, having been hosted for months at dot.net and Bing.com.
What & Why
With this release (after being in preview for around a year, of course), Microsoft merged the source code streams of several key frameworks — .NET Framework, .NET Core and Xamarin/Mono. The effort even unifies threads that separated at inception at the turn of the century, and provides developers one target framework for their work.
Mark Michaelis covered Microsoft's plans for this unification in MSDN Magazine (see the references at the end of this article).
There are many important improvements in .NET 5.0:
- .NET libraries have enhanced performance for JSON serialization, regular expressions, and HTTP (HTTP 1.1, HTTP/2). They are also now completely annotated for nullability.
- P95 latency has dropped due to refinements in the GC, tiered compilation, and other areas.
- Application deployment options are better, with ClickOnce client app publishing, single-file apps, reduced container image size, and the addition of Server Core container images.
- Platform scope expanded with Windows Arm64 and WebAssembly.
Performance!
For anyone interested in .NET and performance, garbage collection is frequently top of mind. Lots of effort goes into reducing allocation, not because the act of allocating is itself particularly expensive, but because of the follow-on costs in cleaning up after those allocations via the garbage collector (GC). No matter how much work goes into reducing allocations, however, the vast majority of workloads will incur them, and thus it’s important to continually push the boundaries of what the GC is able to accomplish, and how quickly.
This release has seen a lot of effort go into improving the GC. For example,
- dotnet/coreclr#25986 implements a form of work stealing for the “mark” phase of the GC
- dotnet/runtime#35896 optimizes decommits on the “ephemeral” segment (gen0 and gen1 are referred to as “ephemeral” because they’re objects expected to last for only a short time). Decommitting is the act of giving pages of memory back to the operating system at the end of segments after the last live object on that segment.
- dotnet/runtime#32795, which improves the GC’s scalability on machines with higher core counts by reducing lock contention involved in the GC’s scanning of statics.
- dotnet/runtime#37894, which avoids costly memory resets (essentially telling the OS that the relevant memory is no longer interesting) unless the GC sees it’s in a low-memory situation.
- dotnet/coreclr#27729, which reduces the time it takes for the GC to suspend threads, something that’s necessary in order for it to get a stable view so that it can accurately determine which are being used.
Not only the GC: .NET 5 is an exciting version for the Just-In-Time (JIT) compiler, too, with many improvements of all manner finding their way into the release. In .NET Core 3.0, over a thousand new hardware intrinsics methods were added and recognized by the JIT to enable C# code to directly target instruction sets like SSE4 and AVX2 (see the docs). These were then used to great benefit in a bunch of APIs in the core libraries. However, the intrinsics were limited to x86/x64 architectures. In .NET 5, a ton of effort has gone into adding thousands more, specific to ARM64. Text processing helpers like
System.Char received some nice improvements in .NET 5. For example, dotnet/coreclr#26848 improved the performance of
char.IsWhiteSpace by tweaking the implementation to require fewer instructions and less branching. And that's not to mention System.Text.RegularExpressions, which has received a myriad of performance improvements.
C# 9
C# 9.0 adds the following features and enhancements to the C# language:
- Records
- Init only setters
- Top-level statements
- Pattern matching enhancements
- Native sized integers
- Function pointers
- Suppress emitting localsinit flag
- Target-typed new expressions
- static anonymous functions
- Target-typed conditional expressions
- Covariant return types
- Extension
GetEnumeratorsupport for
foreachloops
- Lambda discard parameters
- Attributes on local functions
- Module initializers
- New features for partial methods
As an example, take a look at Top-level statements. Top-level statements remove unnecessary ceremony from many applications. Consider the canonical “Hello World!” program:
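A typical version, with all the usual ceremony, looks like this:

using System;

namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}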
There’s only one line of code that does anything. With top-level statements, you can replace all that boilerplate with the
using statement and the single line that does the work:
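The whole program becomes:

using System;

Console.WriteLine("Hello World!");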
If you wanted a one-line program, you could remove the
using directive and use the fully qualified type name:
System.Console.WriteLine("Hello World!");
Eat that Python!
What's new in C# 9.0 - C# Guide
C# 9.0 adds the following features and enhancements to the C# language: Init only setters Top-level statements Pattern…
docs.microsoft.com
EF Core 5.0
The foundation from 3.1 enabled the Microsoft team and community to deliver an astonishing set of new features for EF Core 5.0. Some of the highlights from the 81 significant enhancements include:
- Many-to-many relationship mapping
- Table-per-type inheritance mapping
- IndexAttribute to map indexes without the fluent API
- Database collations
- Filtered Include
- Simple logging
- Exclude tables from migrations
- Split queries for related collections
- Event counters
- SaveChanges interception and events
- Required 1:1 dependents
- Migrations scripts with transactions
- Rebuild SQLite tables as needed in migrations
- Mapping for table-valued functions
- DbContextFactory support for dependency injection
- ChangeTracker.Clear to stop tracking all entities
- Improved Cosmos configuration
- Change-tracking proxies
- Property bags
These new features are part of a larger pool of changes:
- Over 230 enhancements
- Over 380 bug fixes
- Over 80 cleanup and API documentation updates
- Over 120 updates to documentation pages
As an example, in EF Core up to and including 3.x, creating a many-to-many relationship mapping requires including an entity in the model to represent the join table, and then adding navigation properties on either side of the relationship that point to the join entity instead:
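A sketch using hypothetical Post and Tag entities with an explicit PostTag join entity (the names are illustrative, not from the article's project):

public class Post
{
    public int PostId { get; set; }
    public List<PostTag> PostTags { get; set; }
}

public class Tag
{
    public int TagId { get; set; }
    public List<PostTag> PostTags { get; set; }
}

public class PostTag
{
    public int PostId { get; set; }
    public Post Post { get; set; }
    public int TagId { get; set; }
    public Tag Tag { get; set; }
}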
And then in the Db context,
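the join entity has to be configured explicitly. A sketch with the same illustrative names (standard EF Core 3.x-style configuration, not the article's exact code):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
    public DbSet<Tag> Tags { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The join entity needs a composite key and two explicit one-to-many halves.
        modelBuilder.Entity<PostTag>()
            .HasKey(pt => new { pt.PostId, pt.TagId });

        modelBuilder.Entity<PostTag>()
            .HasOne(pt => pt.Post)
            .WithMany(p => p.PostTags)
            .HasForeignKey(pt => pt.PostId);

        modelBuilder.Entity<PostTag>()
            .HasOne(pt => pt.Tag)
            .WithMany(t => t.PostTags)
            .HasForeignKey(pt => pt.TagId);
    }
}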
But with EF Core 5.0, you can map the many-to-many relationship directly.
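A sketch with the same illustrative names; no PostTag class is needed in the model, because each side gets a direct collection navigation:

public class Post
{
    public int PostId { get; set; }
    public List<Tag> Tags { get; set; }
}

public class Tag
{
    public int TagId { get; set; }
    public List<Post> Posts { get; set; }
}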
When Migrations (or
EnsureCreated) are used to create the database, EF Core will automatically create the join table.
Migrate from .NET Core 3.1 to .NET 5.0
Take a look here to see what are the breaking changes in .NET 5.0
Breaking changes, version 3.1 to 5.0 - .NET Core
If you're migrating from version 3.1 of .NET Core, ASP.NET Core, or EF Core to version 5.0 of .NET, ASP.NET Core, or EF…
docs.microsoft.com
I will start with an example project I created a while back for my .NET Core Authentication From Scratch series.
.NET Core 3.0 (Preview 4) Web API Authentication from Scratch (Part 3): Token Authentication
JSON Web Tokens (JWT)
medium.com
We will start with the source code of the same project, the technology stack of the projects is as follows.
- Asp.Net Core 3.1 Web API
- Entity Framework Core 3.1 (Code First)
- SQL Server Database
nishanc/WebApiCore31
Contribute to nishanc/WebApiCore31 development by creating an account on GitHub.
github.com
Before we do anything, if you're using Visual Studio, you need to update it to version 16.8. If you're using the .NET CLI, you need to download the .NET 5.0 SDK from here.
After updating and restarting your PC, try to create a new .NET Core Web Application. If everything was installed properly, you will be able to select .NET Core 5.0 option.
In either case if you open up a command prompt and execute
dotnet --info you should see .NET 5.0 SDK listed.
You also might want to update
dotnet-ef tool as well (Entity Framework Core .NET Command-line Tools) to version 5.0. Check for your version by executing
dotnet ef in
cmd. If it’s not 5.0, update it using following command.
dotnet tool update --global dotnet-ef --version 5.0.0
Let’s open our project from VSCode or any other text editor. We need to update few things in the
.csproj file. (If you’re using Visual Studio open up the
.csproj file by
right click on the project — >
Edit project file.
Cool. Now, edit the
<TargetFramework> from
netcoreapp3.1 to
net5.0
<TargetFramework>net5.0</TargetFramework>
And the other
<PackageReference/> items should be updated to the latest version, especially the packages that start with
Microsoft.AspNetCore and
Microsoft.EntityFrameworkCore. As an example,
Microsoft.AspNetCore.Authentication.JwtBearer should have
Version="5.0.0". Make sure to also update other 3rd-party packages to the latest version if you find something not working properly. Just search the NuGet gallery and get the version.
My
csproj before updating.
My
csproj after updating.
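Roughly, an updated project file for a project like this ends up looking as follows (a sketch; the package list is abbreviated and only the target framework and the 5.0.0 versions discussed above matter):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="5.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="5.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="5.0.0" />
  </ItemGroup>
</Project>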
Now, open up a terminal and execute
dotnet restore to update the packages.
That’s it, now you should be able to run the project as usual. Updated project is available here.
nishanc/WebApiNet50
Contribute to nishanc/WebApiNet50 development by creating an account on GitHub.
github.com
Conclusion
We’ll wait and see what more things are there to come with new updates. Migrating should be a flawless process, not like migrating from .NET Core 2.1 to 3.0, remember those days? You might run into some other issues, but hey, the whole community is migrating as we speak, there should be fix for whatever problems you face.
Take look at Microsoft docs too.
Migrate from ASP.NET Core 3.1 to 5.0
By Scott Addie This article explains how to update an existing ASP.NET Core 3.1 project to ASP.NET Core 5.0. The Visual…
docs.microsoft.com
Happy coding! Stay safe!
References
Announcing .NET 5.0 | .NET Blog
We're excited to release .NET 5.0 today and for you to start using it. It's a major release - including C# 9 and F# 5 …
devblogs.microsoft.com
.NET 5.0 Runtime Epics · Issue #37269 · dotnet/runtime
NET 5.0 Runtime Epics The .NET 5.0 release is composed of many improvements and features. This issue lists the "epics"…
github.com
Performance Improvements in .NET 5 | .NET Blog
In previous releases of .NET Core, I've blogged about the significant performance improvements that found their way…
devblogs.microsoft.com
Regex Performance Improvements in .NET 5 | .NET Blog
The System.Text.RegularExpressions namespace has been in .NET for years, all the way back to .NET Framework 1.1. It's…
devblogs.microsoft.com
C# - .NET Reunified: Microsoft's Plans for .NET 5
July 2019 Volume 34 Number 7 By Mark Michaelis | July 2019 When Microsoft announced .NET 5 at Microsoft Build 2019 in…
docs.microsoft.com
What's new in C# 9.0 - C# Guide
C# 9.0 adds the following features and enhancements to the C# language: Init only setters Top-level statements Pattern…
docs.microsoft.com
Announcing the Release of EF Core 5.0 | .NET Blog
Jeremy Today, the Entity Framework team is delighted to announce the release of EF Core 5.0. This is a general…
devblogs.microsoft.com
|
https://nishanc.medium.com/net-reunified-announcing-net-5-0-c10999f6ccca?source=user_profile---------8----------------------------
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Email is a method of exchanging messages between people using electronic devices. It is a widely used communication medium that is also commonly used for signing up in applications. Lots of invalid emails can consume high bandwidth and increase costs, so the email field should be verified before it is ever sent to the SMTP server. Python provides a couple of methods to check whether an email is valid or not.
Methods to verify email in Python
In this tutorial, we will discuss the following ways to verify an email address:
- Using Python Regex
- Using third-party libraries
Using Python Regex
A regular expression (regex) is a sequence of characters defining a search pattern. It can be used to check if a particular string contains the specified search pattern.
Python provides a built-in module called
re that provides regular expression matching operations. We can simply import it in our program as:
import re
To verify email patterns, we will use one of the
re module's functions, search().
re.search()
It returns a Match object if the specified pattern matches any part of the string; otherwise it returns
None.
Code:
# import regex module
import re

# finalize email regex pattern
pattern = r"^[A-Za-z0-9]+[\._]?[A-Za-z0-9]+[@]\w+[.]\w{2,3}$"

def verify_email(email: str):
    """function to verify email patterns"""
    verify = re.search(pattern, email)
    # check if verify holds a Match object or None
    if verify:
        print("Email verified")
    else:
        print("Email not verified")

email1 = "pythonsansar@example.com"
verify_email(email1)

email2 = "psansar77.com"
verify_email(email2)

email3 = "python.sansar@com"
verify_email(email3)

email4 = "python.sansar@example.com"
verify_email(email4)

email5 = "PYTHON.SANSAR@EXAMPLE.COM"
verify_email(email5)
Output:
Email verified Email not verified Email not verified Email verified Email verified
Using third-party libraries
verify-email and validate_email are Python third-party libraries for checking email patterns as well as their existence.
Note: The Python regex module re used above only checks the format of the email pattern; it does not tell whether the email really exists.
Library: verify-email
verify-email can verify any email address by efficiently checking the domain name and pinging the handler to verify its existence.
pip install verify-email
Code:
from verify_email import verify_email

# dummy email, does not exist
email1 = "pythonsansar@example.com"
# check email1
print(email1, verify_email(email1))

# email exists
email2 = "pythonsansar@gmail.com"
# check email2
print(email2, verify_email(email2))

# email not formatted
email3 = "psansar77.com"
# check email3
print(email3, verify_email(email3))
Output:
pythonsansar@example.com False pythonsansar@gmail.com True psansar77.com False
Library: validate_email
validate_email is a package for Python that checks if an email is valid, properly formatted, and really exists.
pip install validate_email
Code:
from validate_email import validate_email

# dummy email, does not exist
email1 = "pythonsansar@example.com"
# check email1
print(email1, validate_email(email1))

# email exists
email2 = "pythonsansar@gmail.com"
# check email2
print(email2, validate_email(email2))

# email not formatted
email3 = "psansar77.com"
# check email3
print(email3, validate_email(email3))
Output:
pythonsansar@example.com True pythonsansar@gmail.com True psansar77.com False
By default, the validate_email library only checks whether the string matches the email pattern.
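If you also want to check existence rather than just the format, this package exposes extra keyword arguments for that; the check_mx and verify flags below are assumptions based on the package's documented options, so confirm them against the version you install:

from validate_email import validate_email

email = "pythonsansar@gmail.com"

# check_mx=True resolves the domain's MX records;
# verify=True additionally contacts the SMTP server to check the mailbox.
print(validate_email(email, check_mx=True))
print(validate_email(email, verify=True))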
|
https://pythonsansar.com/how-to-verify-email-in-python/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
We've looked at ways to mock methods in Swift. But what about standalone functions? Is there a way to mock them as well?
Yes! Not only can we mock Swift standalone functions, but we can do it without changing the call sites.
Methods vs. Standalone Functions
Methods are functions associated with a type. Mock the type, and you can intercept the method. In Swift, we prefer to do this with protocols. When we don't control the type signature, we can fall back on partial mocking.
But a function lives on its own, without any associated data. In other words, it has no self.
Looking for “Seams”
In other languages, standalone functions present a problem. We still want to intercept these calls, for two main reasons:
- To spy on the arguments a function receives.
- To stub any return values.
But a standalone function, living on its own, is a locked-down dependency. How can we intercept it?
Let's see if we can identify a “seam.”
Disclosure: The book links below are affiliate links. If you buy anything, I earn a commission, at no extra cost to you.
In Working Effectively with Legacy Code, here’s how Michael Feathers defines “seam”:
A seam is a place where you can alter behavior in your program with editing in that place.
In other words, we could always create some sort of wrapper with a different name and call it instead. But that’s not ideal, because we’d have to change the call sites to use this other name. It would be better if we could leave the call sites alone.
Example: Let’s Mock the Precondition Function
In the Marvel Browser TDD sample app, I used Domain Driven Design to figure out that I was missing an object. That object is a NetworkRequest, a wrapper for URLSessionTask. It has a start method that so far looks like this:
func start(_ task: URLSessionTaskProtocol) {
    currentTask = task
    task.resume()
}
I want to make this safer by forbidding start if currentTask is non-nil. At first, I thought about using assert. But the problem with assert is that it’s only active for debug builds. To keep the assertion in place for release builds, we can call precondition instead:
func start(_ task: URLSessionTaskProtocol) {
    precondition(currentTask == nil)  // ?
    currentTask = task
    task.resume()
}
There are at least two seams we can explore to make this testable.
Object Seams
Working Effectively with Legacy Code offers many techniques for uncovering seams. Remember, we want to change the behavior without altering the call site.
One way is to promote the function call to a method call. What happens if we add a precondition method to the NetworkRequest class? We can do this by copying its signature (except we have to use an explicit empty string for the default message).
class NetworkRequest {
    // ...
    func precondition(_ condition: @autoclosure () -> Bool,
                      _ message: @autoclosure () -> String = "",
                      file: StaticString = #filePath,
                      line: UInt = #line) {
        Swift.precondition(condition, message, file: file, line: line)
    }
    // ...
}
All this does is delegate to Swift’s built-in precondition function. The call site
precondition(currentTask == nil)
is now equivalent to
self.precondition(currentTask == nil)
but the self-dot is implied.
Now for testing purposes, we can override that method in a test-specific subclass:
class TestableNetworkRequest: NetworkRequest {
    var preconditionFailed = false

    override func precondition(_ condition: @autoclosure () -> Bool,
                               _ message: @autoclosure () -> String = "",
                               file: StaticString = #filePath,
                               line: UInt = #line) {
        if !condition() {
            preconditionFailed = true
        }
    }
}
The System Under Test (SUT) will be a TestableNetworkRequest instead of a NetworkRequest. This lets us write the unit test:
func test_start_withExistingTask_shouldFailPrecondition() {
    sut.start(fakeTask)
    sut.start(fakeTask)
    XCTAssertTrue(sut.preconditionFailed, "Expected precondition failure")
}
Namespace Seams
Promoting precondition to a method works great when we call it from a class. But what if we want to call it from a struct or an enum? Then we can’t create a test-specific subclass.
A StackOverflow answer by Nikolaj Schumacher shows another way. The fully-qualified name of Swift’s built-in precondition function is Swift.precondition. How does the call site know what to call?
Swift uses some kind of namespace resolution. I don’t know the details of the resolution rules. (Perhaps someone can explain in the comments how imported frameworks work.) But at the very least, the compiler looks first within the namespace of your current target. Failing that, it will fall back to the Swift namespace.
So we can promote the function call from the Swift namespace to our own. We do this by defining our own precondition function, with some helper closures:
let defaultPrecondition = { Swift.precondition($0, $1, file: $2, line: $3) }

var evaluatePrecondition: (Bool, String, StaticString, UInt) -> Void = defaultPrecondition

func precondition(_ condition: @autoclosure () -> Bool,
                  _ message: @autoclosure () -> String = "",
                  file: StaticString = #filePath,
                  line: UInt = #line) {
    evaluatePrecondition(condition(), message(), file, line)
}
As you can see, precondition calls a global closure evaluatePrecondition. By default, it uses defaultPrecondition which calls Swift.precondition. So we preserve the original behavior.
With a global closure, tests need to be careful to set up and restore it. We can do this using XCTestCase’s setUp and tearDown.
var preconditionFailed = false

override func setUp() {
    super.setUp()
    sut = NetworkRequest()
    evaluatePrecondition = { condition, message, file, line in
        if !condition {
            self.preconditionFailed = true
        }
    }
}

override func tearDown() {
    sut = nil
    evaluatePrecondition = defaultPrecondition
    super.tearDown()
}
Now we can write our unit test:
func test_start_withExistingTask_shouldFailPrecondition() {
    sut.start(fakeTask)
    sut.start(fakeTask)
    XCTAssertTrue(preconditionFailed, "Expected precondition failure")
}
Update: Mach Seam for assert/precondition
The object seam and namespace seam techniques work for any Swift standalone functions. But for Swift’s assert or precondition calls, there’s also a Mach seam. Matt Gallagher wrote a Mach exception handler that can be inserted when running on a simulator.
Thank you to Jakub Turek for letting me know about this lower-level seam.
@qcoding while I love your global function method testing approach, there is a way easier approach for preconditions. See and especially Quick + Nimble has a built-in matcher for expect { method() }.to(throwAssertion()) built on top of that.
— Jakub Turek (@KubaTurek) December 3, 2017
Techniques for Mocking Swift Standalone Functions
We’ve looked at two techniques for controlling standalone functions in Swift:
- If you call the function from a class, promote the function call to a method call. Override it in a test-specific subclass.
- If you call the function from a type that prevents subclassing, create a function with the same signature in your code. Use closures to let you replace the behavior, with suitable defaults.
These techniques let us change the behavior without changing the call sites.
Of course, if we can’t find a suitable technique, the final fallback is to give up and change the call sites. We can then use any kind of wrapper we want.
But this can create resistance from other developers on the team. “Why should I change this calling code? It works fine!” If we can avoid changing call sites, we can reduce friction against unit testing. It also reduces the danger of not calling the wrapper.
Throw It Away and TDD It!
Let’s get meta, and step back from Swift function mocking for a bit. How does all this fit with test-driven development (TDD)?
Successful unit testing often requires us to find seams. While I learned this approach from Working Effectively with Legacy Code, it applies to test-driven code as well. Often with TDD, I’ll say to myself, “The production code will look something like this. How do I make this testable?”
That question leads me on a hunt. I use spike solutions to check:
- Does my idea of the production code even work?
- What technique can I use to unit test this?
Once the spike has given me an answer, I throw it away and start afresh. Then I can do a proper test-driven approach.
Why throw away a “working solution” just to start over with TDD?
- Strict TDD will lead to other tests. The simplest, dumbest code that passes our failing test is precondition(false). We need a test that calling start once doesn’t trigger the precondition failure.
- Refactoring can lead to different production code. Things may start the way I first imagined. But the 3-step “TDD waltz” includes refactoring. And continuous refactoring often leads to something different.
There’s a difference between code with unit tests added afterward, and code made with TDD. So don’t stop at “I can write a unit test for this.” You solved the hardest part, so go all the way!
Have you read the Legacy Code book? What did you get out of it? Please share in the comments below.
Yes, we can mock Swift standalone functions (not just methods)
Always Interesting to read your articles. Thanks.
Does this approach work with assert? Tried to test but no success, fake assert was never called and test always stops on the real assert in the code.
Pavel, I assume you’re trying the namespace approach. Make sure your assert code is included in your main target, not your test target. Otherwise it will already be compiled as Swift.assert and your test won’t be able to change it.
Also, check out the update I added to the article, about a Mach exception handling seam. For assert/precondition specifically, it offers another way to trap the calls.
|
https://qualitycoding.org/mocking-standalone-functions/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Hello,
I created a script in Python to convert a txt tab-delimited table into a Geodatabase table using the 'Table to Table' tool. When I run the script it creates the table, but the field names and data types do not follow the script parameters. Fields in the output table are generally named 'field1', 'field2', etc., and fields with the 'Double' data type are converted to 'text'.
This is the script:
arcpy.TableToTable_conversion(TXT, GDB, 'Output_Table', '#', r'IDCODE "Field1" true true false 4 Long 0 0 ,First,#,TXT,Field1,-1,-1;PARAM "Field2" true true false 255 Double 5 10 ,First,#,TXT,Field2,-1,-1;LON "Field3" true true false 255 Double 5 10 ,First,#,TXT,Field3,-1,-1;LAT "Field4" true true false 255 Double 5 10 ,First,#,TXT,Field4,-1,-1;VALUE "Field5" true true false 255 Double 5 10 ,First,#,TXT,Field5,-1,-1;REF_ID "Field6" true true false 4 Long 0 0 ,First,#,TXT,Field6,-1,-1;COMMENT "Field7" true true false 255 Text 0 0 ,First,#,TXT.txt,Field7,-1,-1', '#')
...
TXT and GDB are two variables containing the paths for the .txt file and the output .gdb
The 7 fields should be named: IDCODE, PARAM, LON, LAT, VALUE, REF_ID, COMMENT (I also tried to replace every 'field1,2..etc' in the code above, but I got the same outcome)
Could someone help with this please? is there something wrong in the script os is it a bug in arcpy.TableToTable_conversion?
Thanks
Solved! Go to Solution.
Thanks for the image of the table. It was key to the answer. And, the tool acts differently outside ArcMap; it creates its own data map apparently ignoring the parameter (using version 10.2.1). *** EDIT: By inside ArcMap, I mean running the tool from ArcToolbox. By outside, I mean from a Python IDE with ArcMap closed, although running the results snippet in ArcMap's Python window showed similar issues. ***
So, regarding the table (a tab delimited text file): ArcMap sees a column with a 0 (integer) in the first row and 22.32 (double) in the second as "text". My first recommendation is to change the numbers in the first row from this:
100 0 0 0 0 200 !!
101 10 22.32 67.58 5.12 200 !!
102 20 50.55 84.12 3.65 200 !!
To this (so ArcMap will see them as doubles):
100 0 0.0 0.0 0.0 200 !!
101 10 22.32 67.58 5.12 200 !!
102 20 50.55 84.12 3.65 200 !!
Since there is no header row with field names, ArcMap will default to "Field1", "Field2", etc. So my second suggestion is to add a tab delimited header row, such as:
IDCODE PARAM LON LAT VALUE REF_ID COMMENT
100 0 0.0 0.0 0.0 200 !!
101 10 22.32 67.58 5.12 200 !!
102 20 50.55 84.12 3.65 200 !!
This will name your fields with something meaningful.
When I ran the tool outside ArcMap, I first used the following code with a data file without a header:
import arcpy

TXT = r"C:/Path/To/data.txt"
GDB = r"C:/Path/To/Default.gdb"
TBL = "table2table"

arcpy.TableToTable_conversion(TXT, GDB, TBL, "#", """IDCODE "Field1" true true false 4 Long 0 0 ,First,#,TXT,Field1,-1,-1;PARAM "Field2" true true false 4 Long 0 0 ,First,#,TXT,Field2,-1,-1;LON "Field3" true true false 255 Double 0 0 ,First,#,TXT,Field3,-1,-1;LAT "Field4" true true false 255 Double 0 0 ,First,#,TXT,Field4,-1,-1;VALUE "Field5" true true false 255 Double 0 0 ,First,#,TXT,Field5,-1,-1;REF_ID "Field6" true true false 4 Long 0 0 ,First,#,TXT,Field6,-1,-1;COMMENT "Field7" true true false 255 Text 0 0 ,First,#,TXT,Field7,-1,-1""", "#")
Although the map contains the target field names ("IDCODE", etc.), when the table was created, "Field1"..."Field7" were used.
I then used the version of the data table with the header (field names) and removed the field mapping, and it created the expected result. NOTE: This was tested with ArcMap version 10.2.1.
import arcpy

# tested with ArcMap version 10.2.1
TXT = r"C:/Path/To/data.txt"
GDB = r"C:/Path/To/Default.gdb"
TBL = "table2table"

arcpy.TableToTable_conversion(TXT, GDB, TBL, "#", """#""", "#")
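As a side note (not part of the accepted answer above), if you cannot add a header row and still want meaningful field names, the field map can also be built programmatically with arcpy.FieldMappings instead of the long string parameter. The paths and output name below are placeholders:

import arcpy

TXT = r"C:/Path/To/data.txt"
GDB = r"C:/Path/To/Default.gdb"
new_names = ["IDCODE", "PARAM", "LON", "LAT", "VALUE", "REF_ID", "COMMENT"]

# Build a FieldMappings object from the input table, then rename each output field.
fms = arcpy.FieldMappings()
fms.addTable(TXT)
for index, name in enumerate(new_names):
    fm = fms.getFieldMap(index)
    out_field = fm.outputField
    out_field.name = name
    out_field.aliasName = name
    fm.outputField = out_field        # reassign, since outputField returns a copy
    fms.replaceFieldMap(index, fm)

arcpy.TableToTable_conversion(TXT, GDB, "table2table_mapped", "#", fms)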
Hope this helps.
Does it work ok if you run the tool stand-alone (not scripted)? I have a hard time parsing that string-based field map parameter.
Hi Micah,
yes, the stand-alone tool works correctly. Indeed, I created the script simply by dropping the result of the stand-alone tool into the Python window, and then just replaced the paths with my variables.
If you could give a line or two example of what you expect the results to be and what you are actually seeing, that would be helpful.
Hi Joshua,
the table properties when I run the stand-alone tool look like this:
while using the arcpy script, the result looks like this:
As an aside, ArcPy questions are typically posted to since ArcPy is a different API than.... I realize it is a bit confusing. The growth of places/spaces in GeoNet, even by Esri itself, appears to be taking a more organic approach, much to my chagrin.
Shall I post the same question on and post the link here, so that anyone else who is interested can just be redirected there?
Since we have both mentioned the Python space, it will show up in the feeds over there. I would say you are good for now with this question, in terms of posting in places/spaces.
I think your syntax to invoke field mapping is incorrect. Check the example below.
arcpy.TableToTable_conversion(in_rows="C:/tmp/SelectC01Records.txt", out_path="C:/tmp/Test.gdb", out_name="Junk", where_clause="", field_mapping="""SegCodeNew "SegCode" true true false 6 Text 0 0 ,First,#,C:\tmp\SelectC01Records.txt,SegCode,-1,-1;NM_MPNew "NM_MP" true true false 8 Double 0 0 ,First,#,C:\tmp\SelectC01Records.txt,NM_MP,-1,-1;NM_DesCDNew "NM_DesCD" true true false 4 Long 0 0 ,First,#,C:\tmp\SelectC01Records.txt,NM_DesCD,-1,-1;NM_MPDescNew "NM_MPDesc" true true false 8000 Text 0 0 ,First,#,C:\tmp\SelectC01Records.txt,NM_MPDesc,-1,-1;BrkeyNew "Brkey" true true false 8000 Text 0 0 ,First,#,C:\tmp\SelectC01Records.txt,Brkey,-1,-1""", config_keyword="")
Hi Michael, thanks a lot for your reply.
Unfortunately I am still getting the same issue.
https://community.esri.com/t5/arcgis-api-for-python-questions/arcpy-tabletotable-conversion-wrong-field-name-and/td-p/867881
The Winter ’15 platform release brought us the new Queueable Apex interface. This interface is a cool new way to execute asynchronous computations on the Force.com platform, given you already know @future, Scheduled Apex Jobs, and Batch Jobs.
The main differences between @future methods and Queueable Apex jobs are:
- When you enqueue a new job, you get a job ID that you can actually monitor, like batch jobs or scheduled jobs!
- You can enqueue a queueable job inside a queueable job (no more “Future method cannot be called from a future or batch method” exceptions). As for Winter ’15 release, you can chain a maximum of two queueable jobs per call (so a job can fire another job and that’s it!). With Spring ’15 release, this limit has been removed.
- You can have complex Objects (such as SObjects or Apex Objects) in the job context (@future only supports primitive data types)
All I want to do in this article is show a practical use case for this interface (for the impatient readers out there, the complete code of this article can be found here).
Business requirement: You have to send a callout to an external service whenever a Case is closed.
Constraints: The callout will be a REST POST method that accepts a JSON body with all the non-null Case fields that are filled exactly when the Case is closed (the endpoint of the service will be a simple RequestBin).
The Queueable Apex Use Case
Using a future method, we would pass the case ID to the job and then make a subsequent SOQL query: this goes against the requirement to send the fields exactly as they are on the Case at the time of the update. This might seem an excessive constraint, but with big ORGs and hundreds of future methods in execution (due to system overload), future methods can actually be executed minutes later, so the Case state can be different from when the future was actually fired.
To store the callout attempts (and the responses; this is simply a helper object that lets us report on the attempts), we will use a new SObject called Callout__c with the following fields:
– Case__c: master/detail on Case
– Job_ID__c: external ID / unique / case sensitive, stores the queueable job ID
– Sent_on__c: date/time, when the callout took place
– Duration__c: integer, milliseconds for the callout to be completed (we can report timeouts easily)
– Status__c: picklist, values are Queued (default), OK (response 200), KO (response != 200) or Failed (exception)
– Response__c: long text, stores the server response
To achieve the business requirement, we need a Case trigger:
trigger CaseQueueableTrigger on Case (after insert, after update) {
    List<Callout__c> calloutsScheduled = new List<Callout__c>();
    for (Integer i = 0; i < Trigger.new.size(); i++) {
        Case c = Trigger.new[i];
        Boolean justClosed = c.Status == 'Closed'
            && (Trigger.isInsert || Trigger.old[i].Status != 'Closed');
        if (justClosed) {
            ID jobId = System.enqueueJob(new CaseQueuebleJob(c));
            calloutsScheduled.add(new Callout__c(Case__c = c.Id, Job_ID__c = jobId));
        }
    }
    if (calloutsScheduled.size() > 0) {
        insert calloutsScheduled;
    }
}
The trigger iterates bulkily through the trigger’s cases and if they are created as “Closed” or the Status field changes to “Closed,” a new job is enqueued and a Callout__c object is added to the list that will be inserted outside the “for.”
This way we always have evidence on the system that the callout has been fired.
Remember that you can add up to 50 jobs to the queue with System.enqueueJob in a single transaction, so you have to be sure that the trigger makes a maximum of 50 “System.enqueueJob” invocations (this is up to you!).
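One hedged way to enforce that limit (not shown in the original article) is to check the queueable limits before each enqueue inside the loop:

// Sketch: skip (or collect for later processing) any Cases beyond the
// 50-jobs-per-transaction limit instead of hitting a LimitException.
if (Limits.getQueueableJobs() < Limits.getLimitQueueableJobs()) {
    System.enqueueJob(new CaseQueuebleJob(c));
} else {
    System.debug('Queueable limit reached; handle the remaining Cases separately.');
}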
Let’s have a look at the job class:
public class CaseQueuebleJob implements Queueable, Database.AllowsCallouts { . . . }
The Queueable interface is the main focus of this article, while Database.AllowsCallouts allows us to send a callout inside the job.
The constructor of the class consists on a single class member assignment:
/*
 * Case passed on class creation (the actual ticket from the Trigger)
 */
private Case ticket { get; set; }

/*
 * Constructor
 */
public CaseQueuebleJob(Case ticket) {
    this.ticket = ticket;
}
Finally, let’s watch the main execute method of the job (the one that stores all the aynchronous logic):
// of execution:
1) Creates the JSON payload to be sent though the POST request (watch the method in the provided github repo) for more details (nothing more than a describe and a map).
2) Gets the Callout__c SObject that was created by the Case trigger (and using the context’s Job ID).
3) Gets the starting time of the callout being executed (to calculate the duration).
4) Tries to make the rest call
a. Server responded with a 200 OK
b. Server responded with a non OK status (e.g. 400, 500)
c. Saves the response body in the Response__c field
5) Callout failed, so the Respose__c field is filled with the stacktrace of the exception (believe me this is super usefull when trying to get what happened, expecially when you have other triggers / code in the “try” branch of the code).
6) Unfortunately, if you try to enqueue another job after a callout is done, you get the “Maximum callout depth has been reached.” exception; this is because you can have only two jobs in the queue chain that makes callouts, so if you queue another job with the Database.AllowsCallouts interface, you get this error. This way the job would have tried to enqueue another equal job for future execution.
7) Sets time fields on the Callout__c object.
8)Finally, creates an Attachment object with the JSON request done: this way it can be expected, knowing the precise state of the Case object sent, and can be re-submitted using a re-submission tool that uses the same code (it could be a Batch job for instance).
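The complete method is in the linked GitHub repo; the following is only a rough sketch of what those steps can look like. It is not the author's exact implementation: the endpoint is a placeholder, and step 1 is simplified to a plain JSON.serialize instead of the describe-based map of non-null fields the author describes.

public void execute(QueueableContext context) {
    // 1) Build the JSON payload (the real code maps all non-null Case fields via a describe)
    String payload = JSON.serialize(new Map<String, Object>{ 'values' => this.ticket });
    // 2) Get the Callout__c created by the trigger, via the job ID stored on it
    Id jobId = context.getJobId();
    Callout__c callout = [SELECT Id FROM Callout__c WHERE Job_ID__c = :jobId LIMIT 1];
    // 3) Starting time, used to compute the duration
    Long startMs = System.now().getTime();
    try {
        // 4) Make the REST call
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example-request-bin.invalid/endpoint'); // placeholder endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(payload);
        HttpResponse res = new Http().send(req);
        // 4a/4b) 200 -> OK, anything else -> KO; 4c) store the body either way
        callout.Status__c = res.getStatusCode() == 200 ? 'OK' : 'KO';
        callout.Response__c = res.getBody();
    } catch (Exception e) {
        // 5) Callout failed: keep the exception details for debugging
        callout.Status__c = 'Failed';
        callout.Response__c = e.getMessage() + '\n' + e.getStackTraceString();
    }
    // 6) Chaining another callout-enabled job here would raise "Maximum callout depth has been reached."
    // 7) Time fields
    callout.Sent_on__c = System.now();
    callout.Duration__c = System.now().getTime() - startMs;
    update callout;
    // 8) Attach the request body so it can be inspected and re-submitted later
    insert new Attachment(ParentId = callout.Id, Name = 'request.json', Body = Blob.valueOf(payload));
}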
This is an example request (if you are curious about what I'm sending):

{
  "values": {
    "lastmodifiedbyid": "005w0000003fj35AAA",
    "businesshoursid": "01mw00000009wh7AAA",
    "casenumber": "00001001",
    "ownerid": "005w0000003fj35AAA",
    "createddate": "2015-01-20T09:54:17.000Z",
    "origin": "Phone",
    "isescalated": false,
    "status": "Closed",
    "accountid": "001w0000019wqEIAAY",
    "systemmodstamp": "2015-01-20T19:33:31.000Z",
    "isdeleted": false,
    "priority": "High",
    "id": "500w000000fqNRaAAM",
    "lastmodifieddate": "2015-01-20T19:33:31.000Z",
    "isclosedoncreate": true,
    "createdbyid": "005w0000003fj35AAA",
    "contactid": "003w000001EetwEAAR",
    "type": "Electrical",
    "closeddate": "2015-01-20T19:19:51.000Z",
    "subject": "Test queueable interface",
    "reason": "Performance",
    "potentialliability": "Yes",
    "isclosed": true
  }
}
As already written, the full code for this Queueable Apex use case, with the related metadata, is available on this GitHub repo.
About the author and more.
His daydream is to become the first Italian Force.com evangelist, sharing his passion with all developers, and spending the rest of his professional life (and more) learning new technologies and experimenting.
https://developer.salesforce.com/blogs/2015/05/queueable-apex-future
Transitioning from React Class Components to React Hooks; Using Redux with React Hooks
Below is an example of a class component that sets its state from props on the component mount.
export class Example extends React.Component {
  constructor(props) {
    super(props);
    this.state = { age: 5 };
  }

  componentDidMount() {
    this.setState({ age: this.props.age });
  }

  render() {
    return (
      <div>
        <h1> Age : {this.state.age} </h1>
      </div>
    );
  }
}
Below is the same code but written with a functional component utilizing React Hooks. Neat right?
export function Example(props) {
  const [age, setAge] = useState(5);

  useEffect(() => {
    setAge(props.age);
  }, [props.age]);

  return (
    <div>
      <h1> Age : {age} </h1>
    </div>
  );
}
The first thing you notice with the example above apart from the fact that your code is shorter is that you can now access your lifecycle method using useEffect and set state using useState. Elegant!
I like to use Redux for state management in React, and with the introduction of Hooks I was sceptical about what that meant for me. But with the release of react-redux version 7.1 we get support for React Hooks, so yay! And it was yay x10 when I found out how easy it is to implement.
How to access your Redux state using React Hooks
Do you remember how with class components, you had to do something like this to be able to access the redux state and actions?
import { connect } from 'react-redux'
import { increaseage } from "../redux/actions"

class Example extends React.Component {
  render() {
    const { age } = this.props;
    return (
      <div>
        <h1 onClick={() => this.props.increaseage()}> Age : {age} </h1>
      </div>
    );
  }
}

const mapStateToProps = (state) => {
  const { age } = state.statename
  return { age }
}

const mapDispatchToProps = { increaseage }

export default connect(mapStateToProps, mapDispatchToProps)(Example)
Well, with Hooks, those days are gone!
With React Hooks, you get useDispatch and useSelector; these two replace the mapDispatchToProps and mapStateToProps that you use with connect in class components, so the code above translates to this:
import React from "react";
import { useDispatch, useSelector } from "react-redux";
import { increaseage } from "../redux/actions"

const Example = () => {
  const age = useSelector(state => state.statename.age);
  const dispatch = useDispatch();
  return (
    <div>
      <h1 onClick={() => dispatch(increaseage(age + 1))}>
        Age : {age}
      </h1>
    </div>
  );
}

export default Example;
The useDispatch hook gives you access to the dispatch function, with which you can easily dispatch your actions.
While all of this is exciting, there is nothing wrong with using React class components if you still really want to, but you might just want to give Hooks a chance and, I promise, you won't hate it.
https://adaobiosakwe.medium.com/transitioning-from-react-class-components-to-react-hooks-using-redux-with-react-hooks-eff6ebb1d484?source=user_profile---------4----------------------------
In this tutorial, we’ll learn to build a Progressive Web Application (PWA) with Ionic 4 and Capacitor.
A PWA is a web application similar to traditional web apps but provides extra features for users that were only available in native mobile apps like instant loading, add-to-home screen, splash screens, access to device capabilities, push notifications and offline support.
The term “Progressive” in Progressive Web Apps refers to how these apps provide web experiences which are reliable - by loading instantly regardless of network conditions - fast - by responding immediately and smoothly to every user interaction - and engaging - by providing an immersive and natural user experience.
To achieve these demanding user experience goals, a PWA makes use of Service Workers to support features like instant loading and offline use, so it needs to be securely served over HTTPS, as Service Workers only work on HTTPS connections. PWAs also aim to support both modern and old browsers, which is an important aspect of this kind of app. Since they use modern features that might not be available in all browsers, PWAs rely on progressive enhancement: using a feature when it's available and falling back gracefully in browsers that don't support it.
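As a small illustration of that idea (not part of this tutorial's code), a page script might only register a service worker when the browser actually supports it:

// Progressive enhancement: use the Service Worker API only where it exists.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/ngsw-worker.js') // the worker file used later in this tutorial's Angular setup
    .catch(err => console.warn('Service worker registration failed:', err));
} else {
  console.info('Service workers not supported; the app still works, just without offline support.');
}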
The application we'll be building is a simple JavaScript Jargon app that's based on the Simplified JavaScript Jargon available on GitHub. We'll export the entries as JSON data and we'll consume them from our PWA. We already created a statically generated JSON API available here.
You can find the source code of this app in this GitHub repository.
Note: If you would like to consume an API from your server, make sure you have CORS enabled in your web server. Otherwise, the web browser will block your requests due to the Same Origin Policy available on modern browsers.
Now, let’s get started!
Prerequisites
- You will need to have a development environment with Node.js and npm installed. You can install both of them by going to the official website and grab the binaries for your system.
- Familiarity with TypeScript since we’ll be using Ionic 4 with Angular.
Generating a New Ionic 4 Project
Let’s start by installing Ionic CLI globally on your system from npm using the following command:
npm install -g ionic
Using the CLI, you can generate a new project using the following command:
ionic start
The CLI will prompt you for information about your project, such as the name (enter jsjargonpwa) and starter template (choose sidemenu). This will set up your project.
When prompted with Install the free Ionic Appflow SDK and connect your app? (Y/n), just type n.
You can now navigate to your project’s folder and serve your application locally using:
cd ./jsjargonpwa
ionic serve
Your application will be available from the address.
We’ll be working on the home page, so you can remove the list page. First, delete the list folder containing the files; next, open the
src/app/app.component.ts file and delete the entry for the list page from the
appPages array:
public appPages = [ { title: 'Home', url: '/home', icon: 'home' } ];
Next, open the
src/app/app-routing.module.ts file and delete the route for the list page:
const routes: Routes = [ { path: '', redirectTo: 'home', pathMatch: 'full' }, { path: 'home', loadChildren: './home/home.module#HomePageModule' } ];
Getting JSON Data
We’ll use
HttpClient from Angular to send a GET request to our server to fetch the JSON entries. Before that, we need to import it in our project.
First, open the
src/app/home/home.module.ts file and import
HttpClientModule:
// [...] import { HttpClientModule } from '@angular/common/http'; @NgModule({ imports: [ /* [...] */ HttpClientModule ], declarations: [HomePage] }) export class HomePageModule {}
Next, open the
src/app/home/home.page.ts file and update it accordingly:
import { Component } from '@angular/core'; import { HttpClient } from '@angular/common/http'; @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) export class HomePage { API_URL = ""; entries: Array<any>; constructor(private httpClient: HttpClient){ } ionViewDidEnter(){ this.getData(); } getData(){ this.httpClient.get(this.API_URL).subscribe((entries: any[])=>{ this.entries = entries; }) } }
We declare two variables: API_URL, which holds the address of the JSON file that we need to fetch, and entries, an array that will hold the entries.

Now, let's inject HttpClient as httpClient via the component's constructor.

Next, we add a getData() method that calls the get() method of HttpClient and subscribes to the returned Observable. We then assign the fetched data to the entries variable.

Finally, we add the ionViewDidEnter() event that gets called when the Ionic page is loaded, and we call the getData() method to fetch the entries once the page is loaded.

Next, open the src/app/home/home.page.html file and update it as follows:

<ion-header>
  <ion-toolbar>
    <ion-buttons slot="start">
      <ion-menu-button></ion-menu-button>
    </ion-buttons>
    <ion-title>
      JSJargon
    </ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <ion-list>
    <ion-item *ngFor="let entry of entries">
      <ion-card>
        <ion-card-header>
          <ion-card-title>{{ entry.name }}</ion-card-title>
        </ion-card-header>
        <ion-card-content>
          <p>{{ entry.description }}</p>
        </ion-card-content>
      </ion-card>
    </ion-item>
  </ion-list>
</ion-content>

We simply loop through the entries variable and display the name and description of each entry using an Ionic card.
This is a screenshot of the result:
Adding Capacitor
Capacitor is an open source native container (similar to Cordova) built by the Ionic team that you can use to build web/mobile apps that run on iOS, Android, Electron (Desktop), and as Progressive Web Apps with the same code base. It allows you to access the full native SDK on each platform, and easily deploy to App Stores or create a PWA version of your application.
Capacitor can be used with Ionic or any preferred frontend framework and can be extended with plugins. It has a rich set of official plugins and you can also use it with Cordova plugins.
Installing Capacitor
Let’s start by installing Capacitor in your project:
npm install --save @capacitor/cli @capacitor/core
Next, you need to initialize Capacitor with npx cap init [appName] [appId]:
npx cap init jsjargon com.techiediaries.jsjargon
Using the Clipboard Plugin
Now, let’s use the Clipboard Capacitor plugin in our project to see how Capacitor works by example.
Open the src/app/home/home.page.ts file and add:

import { Plugins } from '@capacitor/core';
const { Clipboard } = Plugins;

Next, add the copy() method which will be used to copy a JS term to the clipboard:

async copy(name: string, text: string){
  Clipboard.write({
    string: name + ' is ' + text
  });
}

Finally, open the src/app/home/home.page.html file and add a button to the Ionic card for each entry:

<ion-card>
  <ion-card-header>
    <ion-card-title>{{ entry.name }}</ion-card-title>
  </ion-card-header>
  <ion-card-content>
    <p>{{ entry.description }}</p>
  </ion-card-content>
  <ion-button (click)="copy(entry.name, entry.description)">
    Copy
  </ion-button>
</ion-card>
This is a screenshot of the result:
Adding a Web Manifest and A Service Worker
A web manifest and service worker are essential requirements for a PWA. You can add both of them using the @angular/pwa package. In your terminal, run:

cd jsjargonpwa
ng add @angular/pwa

This is a screenshot of what @angular/pwa has added and updated in your project:

For example, a src/manifest.webmanifest file is created and referenced in the index.html file:
<link rel="manifest" href="manifest.webmanifest">
Also, different default icons were added in the src/assets/icons folder. In production, you will need to replace these icons with your own.

In the src/app/app.module.ts file, a service worker is registered using the following line:
ServiceWorkerModule.register('ngsw-worker.js', { enabled: environment.production })
This is only enabled for production so you will need to build your application for production to register the service worker.
Next, we need to build our application for production using the following command:
ionic build --prod
Finally, we need to use a hosting service like Netlify to securely host the application over HTTPS (required by a PWA). Here is the link to our hosted PWA.
Conclusion
In this tutorial, we’ve seen how to create a PWA with Ionic 4, Angular and Capacitor.
We have seen an example of sending GET requests using HttpClient and how to access the device clipboard using the Clipboard Capacitor plugin. We hope that this tutorial has been useful in helping you with your own projects.
If you're building Ionic applications with sensitive logic, be sure to protect them against code theft and reverse-engineering by following our guide.
https://blog.jscrambler.com/create-an-ionic-4-pwa-with-capacitor/
Say you have a project named SampleProject. And you want to create a new unit test suite. So you Command-N to make a new file, and select “Unit Test Case Class.”
If we give it the name AppleTests, here’s what Apple provides:
//
//  AppleTests.swift
//  SampleProjectTests
//
//  Created by Jon Reid on 12/12/20.
//

import XCTest

class AppleTests: XCTestCase {
    // ... set-up, tear-down, and example test placeholders, examined below ...
}
It’s instructive… the first time. After that, it’s only noisy. So I use a customized file template for new unit test suites. Command-N and select “Swift XCTest Test Suite.”
It suggests a file name ending with Tests. If we give it the name QualityCodingTests, here’s what I provide:
@testable import SampleProject
import XCTest

final class QualityCodingTests: XCTestCase {
    func test_zero() throws {
        XCTFail("Tests not yet implemented in QualityCodingTests")
    }
}
Isn’t that better? You can download it here:
Curious about the problems I have with Apple’s template and the decisions I made for my custom template? Read on…
What's In Apple’s Template?
Let’s look more closely at what each file template provides. We’ll start with Apple’s “Unit Test Case Class” template.
Prompting for Unnecessary Inputs
If you select the template, Xcode displays a large dialog:
It feels like this large, clunky dialog handles various dynamic options. For test suites:
- It asks for a class name but doesn’t suggest any pattern.
- It asks if we want it to be a subclass of XCTestCase, which of course we do.
- It asks for the programming language.
What do we get next? Another dialog. Xcode prompts us for the location, group, and target.
File Comment Block
Then we get the file content. It starts with a file comment block:
//
//  AppleTests.swift
//  SampleProjectTests
//
//  Created by Jon Reid on 12/12/20.
//
What do you do with these? I delete them, every time. They serve no useful purpose in a project. Even if you work at a company that requires a standard file comment block at the top of each file, it doesn’t look like this. Delete.
Import Lacking Production Code
Next, we have the import statements. Or rather, import statement, singular:
import XCTest
This is incomplete. To access your production code, you need to @testable import the module.
Placeholders for Set-Up and Tear-Down
After the class declaration, we get placeholders for set-up and tear-down:

override func setUpWithError() throws {
    // Put setup code here. This method is called before the invocation of each test method in the class.
}

override func tearDownWithError() throws {
    // Put teardown code here. This method is called after the invocation of each test method in the class.
}
These are instructive, with explanatory comments. But I prefer not to create set-up and tear-down when I start creating a new test suite. I don’t want to make assumptions about what belongs there. Instead, I code a test, then another test. Then I can begin to see what might belong in set-up.
Set-up is there to serve the tests. Wait until you have tests so you can discover what belongs there. Delete them, comments and all.
…Wait, Really Delete Those Placeholders?
You may resist the idea of deleting these function placeholders. You may want them there because you don’t want to type them in later. That’s where my test-oriented code snippets come in. I’m lazy, and don’t enjoy typing the same things over and over. So my code snippets define:
These code snippets are available separately by subscribing to Quality Coding:
Two Test Placeholders, Including a Performance Test
Finally, we get a place to put our test. But again, they are mini-tutorials:

func testExample() throws {
    // This is an example of a functional test case.
    // Use XCTAssert and related functions to verify your tests produce the correct results.
}

func testPerformanceExample() throws {
    // This is an example of a performance test case.
    self.measure {
        // Put the code you want to measure the time of here.
    }
}
Instructive code with explanatory comments is nice the first time. After that, it’s noise. I want to start writing a new test case by typing something new, not by deleting comments.
And can I tell you how many performance test cases I’ve written? Zero. They probably have their place, just not for my needs. Delete.
I want less code, not more.
What’s In My Template?
Now let’s look at the workflow of my “Swift XCTest Test Case” template.
Simple Prompt Suggesting Naming Pattern
Here’s what Xcode shows when you select my template:
First, notice that it suggests a naming pattern. I use the suffix Tests to name test suites because a test suite holds a group of test cases. Just hit Up-Arrow to move the cursor to the beginning of the field, and start typing the rest.
Note that it doesn’t ask you what to subclass, or what programming language to use. You already selected “Swift XCTest Test Case” so we know. (The download also includes an Objective-C version.)
Then specify the location, group, and target.
No File Comment Block
The file doesn’t start with a file comment block. There’s nothing to delete. Move along, move along.
Useful import Statements
The first thing in the file is not one, but two import statements:
@testable import SampleProject
import XCTest
The file template makes an educated guess about the name of your production code module. It assumes it’s the same as your project name.
This isn’t always true, of course. But even when it’s wrong, at least it shows that you should use @testable import to access the production code.
Class Declared final
The class declaration has a subtle difference from Apple’s template.
final class QualityCodingTests: XCTestCase {
I like to declare my test suites as final. Why? It’s very unusual to subclass test suites, so we don’t need dynamic dispatch to call test helpers. Private test helpers become direct function calls instead of dynamic messaging.
I doubt this makes much difference. But why leave any performance on the table? I want tests to run as fast as they possibly can.
Test Zero
Before I write the first test, I like to execute what I call Test Zero:
func test_zero() throws {
    XCTFail("Tests not yet implemented in QualityCodingTests")
}
This is a trick I describe in my book iOS Unit Testing by Example. Test Zero helps check that the new test suite does nothing well. It’s the first check of our new infrastructure.
Once the test fails correctly, I delete it. Then using my test-oriented code snippets, I type “test” to begin writing a test case. The test suite is gloriously empty.
I’m a lazy programmer and don’t want to waste my time deleting things I don’t need, and typing things I do need. I hope you find my XCTestCase file template useful!
Be lazy. Don't waste time deleting test code you don't need, and typing test code you do need. This XCTestCase template helps!
This is a great post and very helpful in cutting time down doing repetitive work. Thanks Jon!
Yay! You’re welcome, Tim.
https://qualitycoding.org/swift-unit-testing-template/
Creating a Discord role-assignment bot.
In this tutorial, we'll create a welcome bot for our programming discussion Discord server. This bot will welcome users as they join and assign them roles and private channels based on their stated interests. By the end of this tutorial, you will:
- Have familiarity with the process of creating a Discord bot application.
- Be able to use discord.py to develop useful bot logic.
- Know how to host Discord bots on Replit!
Getting started
Sign in to Replit or create an account if you haven't already. Once logged in, create a Python repl.
Creating a Discord application
Open another browser tab and visit the Discord Developer Portal. Log in with your Discord account, or create one if you haven't already. Keep your repl open – we'll return to it soon.
Once you're logged in, create a new application. Give it a name, like "Welcomer".
Discord applications can interact with Discord in several different ways, not all of which require bots, so creating one is optional. That said, we'll need one for this project. Let's create a bot.
- Click on Bot in the menu on the left-hand side of the page.
- Click Add Bot.
- Give your bot a username (such as "WelcomeBot").
- Click Reset Token and then Yes, do it!
- Copy the token that appears just under your bot's username.
The token you just copied is required for the code in our repl to interface with Discord's API. Return to your repl and open the Secrets tab in the left sidebar. Create a new secret with DISCORD_TOKEN as its key and the token you copied as its value.
Once you've done that, return to the Discord developer panel. We need to finish setting up our bot.
First, disable the Public Bot option – the functionality we're building for this bot will be highly specific to our server, so we don't want anyone else to try to add it to their server. What's more, bots on 100 or more servers have to go through a special verification and approval process, and we don't want to worry about that.
Second, we need to configure access to privileged Gateway Intents. Depending on a bot's functionality, it will require access to different events and sources of data. Events involving users' actions and the content of their messages are considered more sensitive and need to be explicitly enabled.
For this bot to work, we'll need to be able to see when users join our server, and we'll need to see the contents of their messages. For the former, we'll need the Server Members Intent and for the latter, we'll need the Message Content Intent. Toggle both of these to the "on" position. Save changes when prompted.
Now that we've created our application and its bot, we need to add it to a server. We'll walk you through creating a test server for this tutorial, but you can also use any server you've created in the past, as long as the other members won't get too annoyed about it becoming a bot testing ground. You can't use a server that you're just a normal user on, as adding bots requires special privileges.
Open Discord.com in your browser. You should already be logged in. Then click on the + icon in the leftmost panel to create a new server. Alternatively, open an existing server you own.
In a separate tab, return to the Discord Dev Portal and open your application. Follow these steps to add your bot to your server:
Click on OAuth2 in the left sidebar.
In the menu that appears under OAuth2, select URL Generator.
Under Scopes, mark the checkbox labelled bot.
Under Bot Permissions, mark the checkbox labelled Administrator.
Scroll down and copy the URL under Generated URL.
Paste the URL in your browser's navigation bar and hit Enter.
On the page that appears, select your server from the drop-down box and click Continue.
When prompted about permissions, click Authorize, and complete the CAPTCHA.
Return to your Discord server. You should see that your bot has just joined.
Now that we've done the preparatory work, it's time to write some code. Return to your repl for the next section.
Writing the Discord bot code
We'll be using discord.py to interface with Discord's API using Python. Add the following code scaffold to main.py in your repl:
import os, re, discord
from discord.ext import commands
DISCORD_TOKEN = os.getenv("DISCORD_TOKEN")
bot = commands.Bot(command_prefix="!")
@bot.event
async def on_ready():
print(f"{bot.user} has connected to Discord!")
bot.run(DISCORD_TOKEN)
First, we import the Python libraries we'll need, including discord.py and its commands extension. Next we retrieve the value of the DISCORD_TOKEN environment variable, which we set in our repl's secrets tab above. Then we instantiate a Bot object. We'll use this object to listen for Discord events and respond to them.

The first event we're interested in is on_ready(), which will trigger when our bot logs onto Discord (the @bot.event decorator ensures this). All this event will do is print a message to our repl's console, telling us that the bot has connected.

Note that we've prepended async to the function definition – this makes our on_ready() function into a coroutine. Coroutines are largely similar to functions, but may not execute immediately, and must be invoked with the await keyword. Using coroutines makes our program asynchronous, which means it can continue executing code while waiting for the results of a long-running function, usually one that depends on input or output. If you've used JavaScript before, you'll recognize this style of programming.

The final line in our file starts the bot, providing DISCORD_TOKEN to authenticate it. Run your repl now to see it in action. Once it's started, return to your Discord server. You should see that your bot user is now online.
Creating server roles
Before we write our bot's main logic, we need to create some roles for it to assign. Our Discord server is for programming discussion, so we'll create roles for a few different programming languages: Python, JavaScript, Rust, Go, and C++. For the sake of simplicity, we'll use all-lowercase for our role names. Feel free to add other languages.
You can add roles by doing the following:
Right-click on your server's icon in the leftmost panel.
From the menu that appears, select Server Settings, and then Roles.
Click Create Role.
Enter a role name (for example, "python") and choose a color.
Click Back.
Repeat steps 3–5 until all the roles are created.
Your role list should now look something like this:
The order in which roles are listed is the role hierarchy. Users who have permission to manage roles will only be able to manage roles lower than their highest role on this list. Ensure that the WelcomeBot role is at the top, or it won't be able to assign users to any of the other roles, even with Administrator privileges.
At present, all these roles will do is change the color of users' names and the list they appear in on the right sidebar. To make them a bit more meaningful, we can create some private channels. Only users with a given role will be able to use these channels.
To add private channels for your server's roles, do the following:
- Click on the + next to Text Channels.
- Type a channel name (e.g. "python") under Channel Name.
- Enable the Private Channel toggle.
- Click Create Channel.
- Select the role that matches your channel's name.
- Repeat for all roles.
As the server owner, you'll be able to see these channels regardless of your assigned roles, but normal members will not.
Messaging users
Now that our roles are configured, let's write some bot logic. We'll start with a function to DM users with a welcome message. Return to your repl and enter the following code just below the line where you defined bot:

async def dm_about_roles(member):
    await member.send(
        """
        ...
        """
    )

This simple function takes a member object and sends it a private message. Note the use of await when running the coroutine member.send().
We need to run this function when one of two things happens: a new member joins the server, or an existing member types the command !roles in a channel. The second one will allow us to test the bot without constantly leaving and rejoining the server, and let users change their minds about what programming languages they want to discuss.

To handle the first event, add this code below the definition of on_ready:
@bot.event
async def on_member_join(member):
await dm_about_roles(member)
The on_member_join() callback supplies a member object we can use to call dm_about_roles().

For the second event, we'll need a bit more code. While we could use discord.py's bot commands framework to handle our !roles command, we will also need to deal with general message content later on, and doing both in different functions doesn't work well. So instead, we'll put everything to do with message contents in a single on_message() event. If our bot were just responding to commands, using @bot.command handlers would be preferable.

Add the following code below the definition of on_member_join():
@bot.event
async def on_message(message):
print("Saw a message...")
if message.author == bot.user:
return # prevent responding to self
# Respond to commands
if message.content.startswith("!roles"):
await dm_about_roles(message.author)
First, we print a message to the repl console to note that we've seen a message. We then check if the message's author is the bot itself. If it is, we terminate the function, to avoid infinite loops. Following that, we check if the message's content starts with !roles, and if so we invoke dm_about_roles(), passing in the message's author.

Stop and rerun your repl now. If you receive a CloudFlare error, type kill 1 in your repl's shell and try again. Once your repl's running, return to your Discord server and type "!roles" into the general chat. You should receive a DM from your bot.
Assigning roles from replies
Our bot can DM users, but it won't do anything when users reply to it. Before we can add that logic, we need to implement a small hack to allow our bot to take actions on our server based on the contents of direct messages.
The Discord bot framework is designed with the assumption that bots are generic and will be added to many different servers. Bots do not have a home server, and there's no easy way for them to trace a process flow that moves from a server to private messages like the one we're building here. Therefore, our bot won't automatically know which server to use for role assignment when that user replies to its DM.
We could work out which server to use through the user's mutual_guilds property, but it is not always reliable due to caching. Note that Discord servers were previously known as "guilds" and this terminology persists in areas of the API.

As we don't plan to add this bot to more than one server at a time, we'll solve the problem by hardcoding the server ID in our bot logic. But first, we need to retrieve our server's ID. The easiest way to do this is to add another command to our bot's vocabulary. Expand the if statement at the bottom of on_message() to include the following elif:
elif message.content.startswith("!serverid"):
await message.channel.send(message.channel.guild.id)
Rerun your repl and return to your Discord server. Type "!serverid" into the chat, and you should get a reply from your bot containing a long string of digits. Copy that string to your clipboard.
Go to the top of main.py. Underneath DISCORD_TOKEN, add the following line:

SERVER_ID =

Paste the contents of your clipboard after the equals sign. Now we can retrieve our server's ID from this variable.
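If you prefer not to hardcode the ID, an alternative (not part of the original tutorial) is to store it as another repl secret and read it the same way we read the token:

# Hypothetical alternative: keep the server ID in a secret named SERVER_ID.
# Guild IDs are integers, so convert the string value.
SERVER_ID = int(os.getenv("SERVER_ID"))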
Once that's done, return to the definition of on_message(). We're going to add another if statement to deal with the contents of user replies in DMs. Edit the function body so that it matches the below:
@bot.event
async def on_message(message):
print("Saw a message...")
if message.author == bot.user:
return # prevent responding to self
# NEW CODE BELOW
# Assign roles from DM
if isinstance(message.channel, discord.channel.DMChannel):
await assign_roles(message)
return
# NEW CODE ABOVE
# Respond to commands
if message.content.startswith("!roles"):
await dm_about_roles(message.author)
elif message.content.startswith("!serverid"):
await message.channel.send(message.channel.guild.id)
This new if statement will check whether the message that triggered the event was in a DM channel, and if so, will run assign_roles() and then exit. Now we need to define assign_roles(). Add the following code above the definition of on_message():
async def assign_roles(message):
print("Assigning roles...")
languages = set(re.findall("python|javascript|rust|go|c\+\+", message.content, re.IGNORECASE))
We can find the languages mentioned in the user replies using regular expressions: re.findall() will return a list of strings that match our expression. This way, whether the user replies with "Please add me to the Python and Go groups" or just "python go", we'll be able to assign them the right role.

We convert the list into a set in order to remove duplicates.

The next thing we need to do is deal with emoji responses. Add the following code to the bottom of the assign_roles() function:
language_emojis = set(re.findall("\U0001F40D|\U0001F578|\U0001F980|\U0001F439|\U0001F409", message.content))
#
# Convert emojis to names
for emoji in language_emojis:
{
"\U0001F40D": lambda: languages.add("python"),
"\U0001F578": lambda: languages.add("javascript"),
"\U0001F980": lambda: languages.add("rust"),
"\U0001F439": lambda: languages.add("go"),
"\U0001F409": lambda: languages.add("c++")
}[emoji]()
In the first line, we do the same regex matching we did with the language names, but using emoji Unicode values instead of standard text. You can find a list of emojis with their codes on Unicode.org. Note that the + in this list's code should be replaced with 000 in your Python code: for example, U+1F40D becomes U0001F40D.

Once we've got our set of emoji matches in language_emojis, we loop through it and use a dictionary to add the correct name to our languages set. This dictionary has emoji strings as keys and lambda functions as values. Finally, [emoji]() selects the lambda function for the provided key and executes it, adding a value to languages. This is similar to the switch-case syntax you may have seen in other programming languages.
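If the lambda indirection feels heavy, the same lookup could be written (hypothetically; this is not the tutorial's code) with a plain dictionary that maps each emoji straight to a language name:

# Map each emoji directly to a language name, then add the matches.
EMOJI_TO_LANGUAGE = {
    "\U0001F40D": "python",
    "\U0001F578": "javascript",
    "\U0001F980": "rust",
    "\U0001F439": "go",
    "\U0001F409": "c++",
}
for emoji in language_emojis:
    languages.add(EMOJI_TO_LANGUAGE[emoji])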
We now have a full list of languages our users may wish to discuss. Add the following code below the for loop:
if languages:
server = bot.get_guild(SERVER_ID)
roles = [discord.utils.get(server.roles, name=language.lower()) for language in languages]
member = await server.fetch_member(message.author.id)
This code first checks that the languages set contains values. If so, we use get_guild() to retrieve a Guild object corresponding to our server's ID (remember, guild means server).

We then use a list comprehension and discord.py's get() function to construct a list of all the roles corresponding to languages in our list. Note that we've used lower() to ensure all of our strings are in lowercase.

Finally, we retrieve the member object corresponding to the user who sent us the message and our server.

We now have everything we need to assign roles. Add the following code within the body of the if statement:
try:
await member.add_roles(*roles, reason="Roles assigned by WelcomeBot.")
except Exception as e:
print(e)
await message.channel.send("Error assigning roles.")
else:
await message.channel.send(f"""You've been assigned the following role{"s" if len(languages) > 1 else ""} on {server.name}: { ', '.join(languages) }.""")
The member object's add_roles() method takes an arbitrary number of role objects as positional arguments. We unpack our roles list into separate arguments using the * operator, and provide a string for the named argument reason.

Our operation is wrapped in a try-except-else block. If adding roles fails, we'll print the resulting error to our repl's console and send a generic error message to the user. If it succeeds, we'll send a message to the user informing them of their new roles, making extensive use of string interpolation.

Finally, we need to deal with the case where no languages were found in the user's message. Add an else: block onto the bottom of the if languages: block as below:
else:
await message.channel.send("No supported languages were found in your message.")
Rerun your repl and return to your Discord server. Open the DM channel with your bot and try sending it one or more language names or emojis. You should receive the expected roles. You can check this by clicking on your name in the right-hand panel on your Discord server – your roles will be listed in the box that appears.
Removing roles
Our code currently does not allow users to remove roles from themselves. While we could do this manually as the server owner, we've built this bot to avoid having to do that sort of thing, so let's expand our code to allow for role removal.
To keep things simple, we'll remove any roles mentioned by the user which they already have. So if a user with the "python" role writes "c++ python", we'll add the "c++" role and remove the "python" role.
Let's make some changes. Find the if languages: block in your assign_roles() function and change the code above try: to match the below:
if languages:
server = bot.get_guild(SERVER_ID)
# <-- RENAMED VARIABLE + LIST CHANGED TO SET
new_roles = set([discord.utils.get(server.roles, name=language.lower()) for language in languages])
member = await server.fetch_member(message.author.id)
# NEW CODE BELOW
current_roles = set(member.roles)
We replace the list of roles with a set of new roles. We also create a set of roles the user currently holds. Given these two sets, we can figure out which roles to add and which to remove using set operations. Add the following code below the definition of current_roles:

roles_to_add = new_roles.difference(current_roles)
roles_to_remove = new_roles.intersection(current_roles)

The roles to add will be roles that are in new_roles but not in current_roles, i.e. the difference of the sets. The roles to remove will be roles that are in both sets, i.e. their intersection.
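As a concrete illustration (with made-up role objects, not real discord.py values), suppose a user who already has the python role replies "c++ python":

# Illustrative values only: current_roles comes from member.roles,
# new_roles from the roles named in the user's reply.
current_roles = {python_role}
new_roles = {python_role, cpp_role}

new_roles.difference(current_roles)    # {cpp_role}    -> roles_to_add
new_roles.intersection(current_roles)  # {python_role} -> roles_to_remove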
Now we need to replace the try-except-else block with the code below:
try:
await member.add_roles(*roles_to_add, reason="Roles assigned by WelcomeBot.")
await member.remove_roles(*roles_to_remove, reason="Roles revoked by WelcomeBot.")
except Exception as e:
print(e)
await message.channel.send("Error assigning/removing roles.")
else:
if roles_to_add:
await message.channel.send(f"You've been assigned the following role{'s' if len(roles_to_add) > 1 else ''} on {server.name}: { ', '.join([role.name for role in roles_to_add]) }")
if roles_to_remove:
await message.channel.send(f"You've lost the following role{'s' if len(roles_to_remove) > 1 else ''} on {server.name}: { ', '.join([role.name for role in roles_to_remove]) }")
This code follows the same general logic as our original block, but can remove roles as well as add them.
Finally, we need to update the bot's original DM to reflect this new functionality. Find the dm_about_roles() function and amend it as follows:

async def dm_about_roles(member):
    await member.send(
        """
        ...
        Reply with the name or emoji of a language you're currently using and want to stop and I'll remove that role for you.
        """
    )
Rerun your repl and test it out. You should be able to add and remove roles from yourself. Try inviting some of your friends to your Discord server, and have them use the bot as well. They should receive DMs as soon as they join.
Where next?
We've created a simple Discord server welcome bot. There's a lot of scope for additional functionality. Here are some ideas for expansion:
- Include more complex logic for role assignment. For example, you could have some roles that require users to have been members of the server for a certain amount of time.
- Have your bot automatically assign additional user roles based on behavior. For example, you could give a role to users who react to messages with the most emojis.
- Add additional commands. For example, you might want to have a command that searches Stack Overflow, allowing members to ask programming questions from the chat.
Discord bot code can be hosted on Replit permanently, but you'll need to use an Always-on repl to keep it running 24/7.
You can find our repl below:
https://docs.replit.com/tutorials/discord-role-bot
Welcome to the Core Java Technologies Tech Tips for December 14, 2004. Here you'll get tips on using core Java technologies and APIs, such as those in Java 2 Platform, Standard Edition (J2SE).
This issue covers:
Resource Bundle Loading
Hiding ListResourceBundles from javadoc
A resource bundle is a way of embedding text strings in a language-specific (or more precisely, locale-specific) manner. An earlier Tech Tip discussed the use of resource bundles. What follows is a short refresher. If you have a program that needs a string such as "Hello, World", one approach is to code it in the program. However with resource bundles, you don't hardcode the string. Instead, you put the string in a lookup table, and then your program looks up the string at runtime. If the program runs with a different locale, the lookup finds a different string, if translated, or finds the original string if not translated. This doesn't affect the code in your program -- it runs with the same code, irrespective of locale. The only thing you need to do is create and translate the lookup table of values.
As stated previously, resource bundles work with locales. You can say, "I want the 'greet' string for English," where English is the locale. Or, you can say you want 'color' for U.S. English, and 'colour' for U.K. English. Locales also support regionality. In other words, you can specify a phrase for one dialect of U.S. English (perhaps a phrase used in Southern California), and a different phrase for another U.S. region, say New York City.
You can define a resource bundle in a .class file that extends ListResourceBundle, or you can use a PropertyResourceBundle that is backed by a .properties file. When combining resource bundles and locales, there are two searches involved. The first finds the nearest resource bundle requested, the second finds the string for the requested key. Why the differentiation? When searching for resource bundles, the system stops as soon as it finds and loads the requested resource bundle. If the system doesn't find the key in the requested bundle, it then hunts in other resource bundles until it finds the key. Ultimately, if it doesn't find the key, the system throws a MissingResourceException.
To demonstrate, suppose you want to find a string for a New York locale in a bundle named Greeting. Suppose too that your Locale was created as follows:
Locale newYork = new Locale("en", "US", "NewYork")
and you asked for a resource bundle like this:
ResourceBundle bundle =
ResourceBundle.getBundle("Greeting", newYork);
The system first looks for the .class file for the bundle. With a region/variant level of locale, such as New York, the file would be Greeting_en_US_NewYork.class. If the system can't find the .class file in the classpath, it then searches for the file Greeting_en_US_NewYork.properties. And if it can't find that file, the system subsequently searches for Greeting_en_US.class, followed by Greeting_en_US.properties, Greeting_en.class, Greeting_en.properties, Greeting.class, and Greeting.properties. The searching stops when the system finds the resource bundle. Thankfully, there is caching involved, so the system doesn't always search everywhere, but that's still potentially a lot of different places that have to be searched.
The system then performs a second round of lookups -- this time for the requested key. If the key isn't in the bundle it found, the system looks for more resource bundles, beyond the language, country, and variant level of the current bundle. This could load more bundles, whether they are .class files or .properties files.
One question you might have is which approach is better, using .class files or using .properties files? Notice that .class files are searched for first, then .properties files. Also note that .class files are loaded directly by the class loader, but .properties files have to be parsed each time the bundle needs to be loaded. Parsing is a two-pass process. To deal with Unicode strings such as \uXXX, the system must scan each key=value line twice, and then split the key from the value.
Let's investigate both approaches further by comparing load times. Start with the following test program:
import java.util.*;
public class Test1 {
public static void main(String args[]) {
Locale locale = Locale.ENGLISH;
long start = System.nanoTime();
ResourceBundle myResources =
ResourceBundle.getBundle("MyResources", locale);
long end1 = System.nanoTime();
String string = myResources.getString("HelpKey");
long end2 = System.nanoTime();
System.out.println("Load: " + (end1 - start));
System.out.println("Fetch: " + (end2 - end1));
System.out.println("HelpKey: " + string);
}
}
If you are running on a 1.4 Java platform, you need to change the test program so that it calls currentTimeMillis instead of nanoTime. The nanoTime method works with nanosecond precision. The currentTimeMillis works only in milliseconds. Also, see the note about microbenchmarks at the end of this tip.
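For instance (this substitution is not shown in the tip itself), the timing lines would become:

// On J2SE 1.4, System.nanoTime() is unavailable, so fall back to milliseconds:
long start = System.currentTimeMillis();
ResourceBundle myResources = ResourceBundle.getBundle("MyResources", locale);
long end1 = System.currentTimeMillis();
String string = myResources.getString("HelpKey");
long end2 = System.currentTimeMillis();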
Next, create a ListResourceBundle class in the same directory as the test program:
import java.util.*;
public class MyResources extends ListResourceBundle {
public Object[][] getContents() {
return contents;
}
private static final Object[][] contents = {
{"OkKey", "OK"},
{"CancelKey", "Cancel"},
{"HelpKey", "Help"},
{"YesKey", "Yes"},
{"NoKey", "No"},
};
}
Compile the test program and the MyResources class. Then run the test program.
Your results will depends on your operating environment, your RAM size, and the speed of your processor. Here's a result produced in a 800 MHz machine running Windows XP with 768 MB RAM:
Load: 25937415
Fetch: 62994
Now create a properties file, MyResources.properties, with the following elements:
OkKey=OK
CancelKey=Cancel
HelpKey=Help
YesKey=Yes
NoKey=No
Run the test program again, but first remove the MyResources class. This will run the program using the .properties files. Here's the result produced in the same machine as before:
Load: 101469357
Fetch: 35450
The load times show the ListResourceBundle approach is faster than the PropertyResourceBundle approach. But, surprisingly, the fetch times show that the PropertyResourceBundle approach is almost twice as fast as ListResourceBundle approach. With roughly a five times difference in loading and a two times difference in fetching, you'd have to do a lot of fetches to catch up. Keep in mind that a nanosecond is a billionth of a second and a millisecond is a thousandth of a second.
Now run the tests again, but this time use 100 elements in the .class and .properties files. To create the file, you can simply copy the five elements in the previous files 20 times, and change the entries slightly with each copy. For example, change OkKey to OkKey1, CancelKey to CancelKey1, and so on. Your results should follow the earlier results. Loading should be faster with the ListResourceBundle, but fetching should be faster with the PropertyResourceBundle. Actually, you should find that the load time of 100 resources for a PropertyResourceBundle is close to that of five elements.
ListResourceBundle
Load: 12782686
Fetch: 262788
PropertyResourceBundle
Load: 12600795
Fetch: 35175
Changing the Locale from language (Locale.ENGLISH) to language and country (Locale.US) produces even more interesting results:
ListResourceBundle
Load: 13152117
Fetch: 32921
PropertyResourceBundle
Load: 14592024
Fetch: 261060
ListResourceBundle:
Load: 12837863
Fetch: 264264
PropertyResourceBundle
Load: 14468366
Fetch: 33166
In all cases, while loading the initial bundle is always faster for the ListResourceBundle, fetching is sometimes slower. So which way do you go? For smaller resource bundles, the ListResourceBundle does seem to be the faster of the two. For larger ones, it seems best to stay away from ListResourceBundle. The ListResourceBundle needs to convert its two-dimensional array into a lookup map, and that conversion is the reason for the slower fetch time.
Looking at these results, you might think that a ListResourceBundle should never be used. For instance, for a server-based program, it is easier to maintain a .properties file than a .class file, and the load time is negligible. But, a ListResourceBundle is not just a two-dimensional array of strings. The getContents method returns an Object array:
public Object[][] getContents()
What does this mean? If you want to localize content beyond simple strings, you must use ListResourceBundle objects. This allows you to localize content such as images, colors, and dimensions. You can't have arbitrary objects in a PropertyResourceBundle, only strings.
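For example, a sketch of a bundle that mixes strings with other object types might look like the following (the class name, keys, and values here are hypothetical and not part of the tip's test code):
import java.awt.Color;
import java.awt.Dimension;
import java.util.ListResourceBundle;
public class MyUiResources extends ListResourceBundle {
    public Object[][] getContents() {
        return contents;
    }
    private static final Object[][] contents = {
        {"OkKey", "OK"},                           // an ordinary string
        {"HighlightColor", Color.ORANGE},          // a non-string object
        {"PreferredSize", new Dimension(300, 200)} // another non-string object
    };
}
A caller would then retrieve the non-string values with getObject, for example (Color) bundle.getObject("HighlightColor"), instead of getString.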
Note that the timing test in the sample program can be considered a microbenchmark. It can certainly be improved. However, with the caching of resource bundle loading, it's hard to get accurate load times when looping multiple times in the same run. Multiple runs should be used to validate results. For information on techniques for writing microbenchmarks, see the JavaOne 2002 presentation How NOT To Write A Microbenchmark. In addition, a lot of performance work in this area has been done for JDK 5.0. Your numbers may differ substantially using Java 2 SDK, Standard Edition, v 1.4.x.
For additional information about working with resource bundles, see the javadoc for the ResourceBundle class, the internationalization trail in the Java Tutorial, and the Core Java Internationalization page.
The first Tech Tip in this issue, Resource Bundle Loading, made some performance comparisons between the ListResourceBundle approach and the PropertyResourceBundle approach. If you decide to take the ListResourceBundle approach instead of the alternative PropertyResourceBundle route, there is one more thing to consider: because ListResourceBundles are classes, the javadoc tool will generate documentation for them along with the rest of your source code, which is usually not what you want. How do you address this issue? In fact, is there a way to hide ListResourceBundles from javadoc? This tip shows you a way to do that.
By default, the javadoc tool supports two options for suppressing classes from the output. You can specify a list of all the classes in a file and direct the tool to run javadoc on this fixed set. Or you can place all the resource bundles in a package and then direct the tool to run on a set of packages that ignores the package in which the resource bundle is located. The first technique is cumbersome -- maintaining the list is difficult. The second technique prevents you from keeping the resource bundles in the same directories as the source that uses them.
So how can you customize javadoc to ignore specific classes when generating its output? The answer is that instead of generating a complete list of classes to document (with the resource bundle classes missing), you simply provide a list of the resource bundle classes to exclude.
This solution works for both the 1.4 and 5.0 releases of J2SE. To do this you run a doclet that accepts an option, -excludefile, which excludes a set of classes that you specify. Here's how you run the doclet (note that the command should go on one line):
java -classpath <path to doclet and path to tools.jar>
ExcludeDoclet -excludefile <path to exclude file>
<javadoc options>
In response to the command, the validOptions method of the doclet looks for the -excludefile option. If it finds it, the method reads the contents of the exclude file -- these are the set of classes and packages to ignore. Then the start method is called. As each class or package is processed, the method throws away the classes and packages in the exclude set. The doclet also includes the optionLength method, which allows it to run under both J2SE 1.4 and 5.0. Here is the doclet, ExcludeDoclet:
import java.io.*;
import java.util.*;
import com.sun.tools.javadoc.Main;
import com.sun.javadoc.*;
/**
* A wrapper for Javadoc. Accepts an additional option
* called "-excludefile", which specifies which classes
* and packages should be excluded from the output.
*
* @author Jamie Ho
*/
public class ExcludeDoclet extends Doclet {
private static List m_args = new ArrayList();
private static Set m_excludeSet = new HashSet();
/**
* Iterate through the documented classes and remove the
* ones that should be excluded.
*
* @param root the initial RootDoc (before filtering).
*/
public static boolean start(RootDoc root) {
root.printNotice
("\n\nRemoving excluded source files.......\n\n");
ClassDoc[] classes = root.classes();
for (int i = 0; i < classes.length; i++) {
if (m_excludeSet.contains(classes[i].qualifiedName()) ||
m_excludeSet.contains
(classes[i].containingPackage().name())) {
root.printNotice
("Excluding " + classes[i].qualifiedName());
continue;
}
m_args.add(classes[i].position().file().getPath());
}
root.printNotice("\n\n");
return true;
}
/**
* Let every option be valid. The real validation happens
* in the standard doclet, not here. Remove the "-excludefile"
* and "-subpackages" options because they are not needed by
* the standard doclet.
*
* @param options the options from the command line.
* @param reporter the error reporter.
*/
public static boolean validOptions(String[][] options,
DocErrorReporter reporter) {
for (int i = 0; i < options.length; i++) {
if (options[i][0].equalsIgnoreCase("-excludefile")) {
try {
readExcludeFile(options[i][1]);
} catch (Exception e) {
e.printStackTrace();
}
continue;
}
if (options[i][0].equals("-subpackages")) {
continue;
}
for (int j = 0; j < options[i].length; j++) {
m_args.add(options[i][j]);
}
}
return true;
}
/**
* Parse the file that specifies which classes and packages
* to exclude from the output. You can write comments in this
* file by starting the line with a '#' character.
*
* @param filePath the path to the exclude file.
*/
private static void readExcludeFile(String filePath)
throws Exception {
LineNumberReader reader =
new LineNumberReader(new FileReader(filePath));
String line;
while ((line = reader.readLine()) != null) {
if (line.trim().startsWith("#"))
continue;
m_excludeSet.add(line.trim());
}
}
/**
* Method required to validate the length of the given option.
* This is a bit ugly but the options must be hard coded here.
* Otherwise, Javadoc will throw errors when parsing options.
* We could delegate to the Standard doclet when computing
* option lengths, but then this doclet would be dependent on
* the version of J2SE used. I'd rather hard code so that
* this doclet can be used with 1.4.x or 1.5.x.
*
* @param option the option to compute the length for.
*/
public static int optionLength(String option) {
if (option.equalsIgnoreCase("-excludefile")) {
return 2;
}
//General options
if (option.equals("-author") ||
option.equals("-docfilessubdirs") ||
option.equals("-keywords") ||
option.equals("-linksource") ||
option.equals("-nocomment") ||
option.equals("-nodeprecated") ||
option.equals("-nosince") ||
option.equals("-notimestamp") ||
option.equals("-quiet") ||
option.equals("-xnodate") ||
option.equals("-version")) {
return 1;
} else if (option.equals("-d") ||
option.equals("-docencoding") ||
option.equals("-encoding") ||
option.equals("-excludedocfilessubdir") ||
option.equals("-link") ||
option.equals("-sourcetab") ||
option.equals("-noqualifier") ||
option.equals("-output") ||
option.equals("-sourcepath") ||
option.equals("-tag") ||
option.equals("-taglet") ||
option.equals("-tagletpath")) {
return 2;
} else if (option.equals("-group") ||
option.equals("-linkoffline")) {
return 3;
}
//Standard doclet options
option = option.toLowerCase();
if (option.equals("-nodeprecatedlist") ||
option.equals("-noindex") ||
option.equals("-notree") ||
option.equals("-nohelp") ||
option.equals("-splitindex") ||
option.equals("-serialwarn") ||
option.equals("-use") ||
option.equals("-nonavbar") ||
option.equals("-nooverview")) {
return 1;
} else if (option.equals("-footer") ||
option.equals("-header") ||
option.equals("-packagesheader") ||
option.equals("-doctitle") ||
option.equals("-windowtitle") ||
option.equals("-bottom") ||
option.equals("-helpfile") ||
option.equals("-stylesheetfile") ||
option.equals("-charset") ||
option.equals("-overview")) {
return 2;
} else {
return 0;
}
}
/**
* Execute this doclet to filter out the unwanted classes
* and packages. Then execute the standard doclet.
*
* @param args The Javadoc arguments from the command line.
*/
public static void main(String[] args) {
String name = ExcludeDoclet.class.getName();
Main.execute(name, name, args);
Main.execute((String[]) m_args.toArray(new String[] {}));
}
}
Compile the doclet as follows:
javac -classpath tools.jar ExcludeDoclet.java
Replace tools.jar with the appropriate location of tools.jar in your JDK installation. For example, if you're running in the Windows environment and your JDK is installed in the c:\jdk1.5.0 directory, specify c:\jdk1.5.0\lib\tools.jar.
Next, create a file such as skip.txt to identify which classes to skip. Normally, this would be your set of ListResourceBundle subclasses. For this example, run ExcludeDoclet with the standard JDK classes, and ignore a set in the java.lang package:
java.lang.Math
java.lang.Long
java.lang.InternalError
java.lang.InterruptedException
java.lang.Iterable
java.lang.LinkageError
Then run the following command (on one line):
java -classpath .;c:\jdk1.5.0\lib\tools.jar ExcludeDoclet
-d docs -excludefile skip.txt -sourcepath c:\jdk1.5.0\src
-source 1.5 java.lang
The command will generate the javadoc for the java.lang package, excluding the six classes and interfaces identified in skip.txt.
Here is part of the generated javadoc showing the interfaces in the java.lang package. Notice that the Iterable interface is excluded.
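In a real project, the exclude file would simply list your own ListResourceBundle subclasses. As a hypothetical sketch (the class and package names below are invented), note that readExcludeFile skips lines starting with '#', and start compares package names as well as class names, so you can exclude a whole package of bundles at once:
# generated resource bundles; keep these out of the javadoc
com.example.app.MyResources
com.example.app.MyResources_de
# or exclude the entire package that holds them
com.example.app.resources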
For additional information about creating custom doclets, see the tip Generating Custom Doclets.
|
http://java.sun.com/developer/JDCTechTips/2004/tt1214.html
|
crawl-001
|
en
|
refinedweb
|
// Some of the butt-ugly generics syntax.
// Kennel requires two classes, much like Map.
// Here & means "and also"; it has nothing at all to do with its regular meaning of logical intersection.
// Also note the required "extends" on an interface (Comparable) instead of "implements".
public class Kennel <D extends Comparable<D> & Serializable, C>
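A minimal compilable sketch built around that declaration might look like the following; the field and methods are invented here purely to show the bounds in use:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
// D must be Comparable to other Ds AND Serializable; C is unconstrained.
public class Kennel<D extends Comparable<D> & Serializable, C> {
    private final List<D> dogs = new ArrayList<D>();
    private C cat; // invented field, just to give the second type parameter a use
    public void add(D dog) {
        dogs.add(dog);
    }
    public void setCat(C cat) {
        this.cat = cat;
    }
    public D best() {
        // the Comparable bound is what allows the Ds to be compared
        return Collections.max(dogs);
    }
}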
|
http://www.mindprod.com/jgloss/snippet/iframe/generics.example101.javafrag.html
|
crawl-001
|
en
|
refinedweb
|
This blog demonstrates how to create custom filter in AngularJS.
Introduction
Getting Started
Creating an AngularJS custom filter is as easy as declaring a controller in AngularJS. The filter factory function of a module helps to create a custom filter. It takes a string value as its first parameter, the name of the custom filter, and the second parameter is the function in which the filtering logic is applied. Filter names must be valid AngularJS expression identifiers, such as uppercase or orderBy. Names with special characters, such as hyphens and dots, are not allowed. If you wish to namespace your filters, you can use capitalization 'myappSubsectionFilterx' or underscores 'myapp_subsection_filterx'. The filter function should be a pure function, which means that it should always return the same result given the same input arguments and should not affect external state. The syntax of a filter is the same as that of a controller.
Example
|
https://www.c-sharpcorner.com/blogs/custom-filter-in-angular
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Description of problem:
when compiling libvirt-java JNI code while using java-1.6.0-openjdk-devel
for the stub generation and headers, the resulting C stub code compiled with
gcc uses jni.h and jni_md.h from gcj-devel if that one is installed.
Problem is that they conflict too:
make[4]: Entering directory `/u/veillard/libvirt-java/src/jni'
/bin/sh ../../libtool --tag=CC --mode=compile -o
libvirt_jni_la-org_libvirt_VirNetwork.lo `test -f 'org_libvirt_VirNetwork.c' ||
echo './'`org_libvirt_VirNetwork.c
mkdir .libs org_libvirt_VirNetwork.c
-fPIC -DPIC -o .libs/libvirt_jni_la-org_libvirt_VirNetwork.o
In file included from org_libvirt_VirNetwork.h:3,
from org_libvirt_VirNetwork.c:2:
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/include/jni.h:57: error:
conflicting types for 'jboolean'
/usr/lib/gcc/x86_64-redhat-linux/4.3.0/include/jni_md.h:81: error: previous
declaration of 'jboolean' was here
make[4]: *** [libvirt_jni_la-org_libvirt_VirNetwork.lo] Error 1
Version-Release number of selected component (if applicable):
libgcj-devel-4.3.0-8
java-1.6.0-openjdk-devel-1.6.0.0-0.15.b09.fc9.x86_64
How reproducible:
Steps to Reproduce:
1. select openjdk with alternatives
2. try to compile libvirt-java code
3.
Actual results:
conflict of header
Expected results:
no conflict
Additional info:
the includes should probably not be exported to the compiler by default,
or if it is the case they should be made compatible between the two
versions
IMNSHO it is perfectly fine as is, I don't see a reason why it should be hidden
in any way.
When you compile against openjdk, just make sure its include directories are all
mentioned in -I options. The above sounds like you are including some gcj-devel
header (as jni_md.h is gcc specific, while jni.h is not).
Preprocessed source (perhaps with -E -dD even better) might reveal where exactly
is the problem.
Yes the system specific $JAVA_HOME/include/linux include path was missing
on the command line.
Still, I consider it a bug that normal gcc exports GCJ JNI header paths,
especially when the OpenJDK and GCJ includes are not compatible. That's
called header pollution IMHO; you're spreading into a namespace where you
don't have control. Using gcc as the compiler should not mean you might be
using the gcj JNI include files. That looks just like a recipe for
broken compiles and broken projects (because their JNI code compiles even though
they forgot to add the include paths).
I'm ready to bet that fixing that bug will expose various problems in the
way JNI bindings are compiled left and right...
Daniel
Not So humble Opinion indeed... switching to CLOSED NOTABUG while not even
looking at the incompatibility of the headers means you really don't want
to be bothered ... I did post that bug after having asked feedback from
people on the java and tools internal channels, and people stated the
gcj headers really should be fixed because a guard for header inclusion was
missing. You don't care, okay
Daniel
|
https://partner-bugzilla.redhat.com/show_bug.cgi?id=453572
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
First things first: Dialog boxes are just Forms that are called or started differently and can, if you want, pass and/or return data and return a DialogResult. That's it! Forget what you once knew about dialog boxes (if you were a classic Visual C++ MFC programmer)—things have gotten a lot easier.
Everything that you've learned so far in this chapter works the same for dialog boxes. All you need to do is learn a couple of optional features and how to call the dialog box itself, and then you'll know all you need to develop dialog boxes.
Building a custom dialog box is almost exactly the same as creating the main Win Form, except it requires two additional steps. Here are the steps you follow to create a custom dialog box:
Right-click the project folder within Solution Explorer.
Select Add New Item from the drop-down menu item Add. A dialog box similar to the one in Figure 10-16 appears.
Figure 10-16: The Add New Item dialog box
Select the Windows Form (.NET) icon from the Templates panel and give the dialog box a name. I used MyDialog.
Click Open. This will provide you with an empty form in the design window.
Build the form exactly as you do the main form.
You can now work with this form in exactly the same way as you do with the application's main form, except for a couple of minor things.
The first minor difference is that if you want to pass information to the dialog box or get information back from the dialog box, you need to add properties to your form to get and set the information:
public:
    __property void set_PassedValue(String *value)    // PassedValue property
    {
        tbPassedValue->Text = value;
    }
    __property String *get_PassedValue()
    {
        return tbPassedValue->Text;
    }
Another method of doing this would be to change the constructor to send data to the dialog box, but I prefer properties. Plus, if you use the constructor to pass data to the dialog box, you still need to create properties or methods to send data back, so why not bite the bullet and use properties in both cases? This method is clean and safe (because you can verify the validity of the passed data) and it's easy to use.
The second change that you can make, which is totally optional, is to change the style of the dialog box to look more like a dialog box and less like a form:
this->FormBorderStyle = System::Windows::Forms::FormBorderStyle::FixedToolWindow;
// Or
this->FormBorderStyle = System::Windows::Forms::FormBorderStyle::SizableToolWindow;
The third difference is that you want to have any buttons that close your dialog box return a DialogResult. The .NET Framework class library provides a number of possible DialogResults (see Table 10-2).
To return a DialogResult value to the calling form, you need to assign, to the button that will end the dialog, the desired DialogResult value:
bnOK->DialogResult = DialogResult::OK;
When the button is clicked, it will automatically return the DialogResult it was set to (DialogResult::OK is set in the preceding code). By the way, you can still handle the Click event, if you need to, for the button. (You can even change its DialogResult in the handler if you really want to. For example, you could turn DialogResult::OK into DialogResult::Cancel if no text is entered in the dialog box.)
The final change you are probably going to want to make is to assign default buttons to respond to the Accept and Cancel conditions. You do this by assigning a button to the form's AcceptButton and CancelButton properties:
AcceptButton = bnOK; CancelButton = bnCancel;
Once you have performed the preceding additional steps, you have a complete custom dialog box. Listing 10-14 shows the code of a custom dialog box that takes in some text, places it in a text box, allows it to be updated, and then returns the text back updated to the calling form. The dialog box also allows the user to abort or cancel the dialog box.
Listing 10-14: The MyDialog Class File
#pragma once using namespace System; using namespace System::ComponentModel; using namespace System::Collections; using namespace System::Windows::Forms; using namespace System::Data; using namespace System::Drawing; namespace CustomDialog { public __gc class MyDialog : public System::Windows::Forms::Form { public: MyDialog(void) { InitializeComponent(); } public: __property void set_PassedValue(String *value) // PassedValue property { tbPassedValue->Text = value; } __property String *get_PassedValue() { return tbPassedValue->Text; } protected: void Dispose(Boolean disposing) { if (disposing && components) { components->Dispose(); } __super::Dispose(disposing); } private: System::Windows::Forms::Button * bnOK; private: System::Windows::Forms::Button * bnAbort; private: System::Windows::Forms::Button * bnCancel; private: System::Windows::Forms::TextBox * tbPassedValue; private: System::ComponentModel::Container * components; void InitializeComponent(void) { this->tbPassedValue = new System::Windows::Forms::TextBox(); this->bnOK = new System::Windows::Forms::Button(); this->bnAbort = new System::Windows::Forms::Button(); this->bnCancel = new System::Windows::Forms::Button(); this->SuspendLayout(); // // tbPassedValue // this->tbPassedValue->Location = System::Drawing::Point(15, 25); this->tbPassedValue->Name = S"tbPassedValue"; this->tbPassedValue->Size = System::Drawing::Size(250, 22); this->tbPassedValue->TabIndex = 0; this->tbPassedValue->Text = S""; // // bnOK // this->bnOK->DialogResult = System::Windows::Forms::DialogResult::OK; this->bnOK->Location = System::Drawing::Point(15, 72); this->bnOK->Name = S"bnOK"; this->bnOK->TabIndex = 1; this->bnOK->Text = S"OK"; // // bnAbort // this->bnAbort->DialogResult = System::Windows::Forms::DialogResult::Abort; this->bnAbort->Location = System::Drawing::Point(104, 72); this->bnAbort->Name = S"bnAbort"; this->bnAbort->TabIndex = 2; this->bnAbort->Text = S"Abort"; // // bnCancel // this->bnCancel->DialogResult = System::Windows::Forms::DialogResult::Cancel; this->bnCancel->Location = System::Drawing::Point(192, 72); this->bnCancel->Name = S"bnCancel"; this->bnCancel->TabIndex = 3; this->bnCancel->Text = S"Cancel"; // // MyDialog // this->AcceptButton = this->bnOK; this->AutoScaleBaseSize = System::Drawing::size(6, 15); this->CancelButton = this->bnCancel; this->ClientSize = System::Drawing::Size(300, 120); this->Controls->Add(this->bnCancel); this->Controls->Add(this->bnAbort); this->Controls->Add(this->bnOK); this->Controls->Add(this->tbPassedValue); this->FormBorderStyle = System::Windows::Forms::FormBorderStyle::FixedToolWindow; this->Name = S"MyDialog"; this->Text = S"My Custom Dialog"; this->ResumeLayout(false); } }; }
Figure 10-17 shows what the preceding example looks like when you execute it.
Figure 10-17: A custom dialog box
Now let's take a look at the code to implement a custom dialog box (see Listing 10-15). The example calls the dialog box by clicking anywhere in the form.
Listing 10-15: Implementing a Custom Dialog Box
namespace CustomDialog { using namespace System; using namespace System::ComponentModel; using namespace System::Collections; using namespace System::Windows::Forms; using namespace System::Data; using namespace System::Drawing; public __gc class Form1 : public System::Windows::Forms::Form { public: Form1(void) //... protected: void Dispose(Boolean disposing) //... private: System::Windows::Forms::Label * lbRetVal; private: System::Windows::Forms::Label * lbRetString; private: System::ComponentModel::Container * components; void InitializeComponent(void) { this->lbRetVal = new System::Windows::Forms::Label(); this->lbRetString = new System::Windows::Forms::Label(); this->SuspendLayout(); // // lbRetVal // this->lbRetVal->Location = System::Drawing::Point(32, 40); this->lbRetVal->Name = S"lbRetVal"; this->lbRetVal->Size = System::Drawing::Size(224, 23); // // lbRetString // this->lbRetString->Location = System::Drawing::Point(32, 88); this->lbRetString->Name = S"lbRetString"; this->lbRetString->Size = System::Drawing::Size(224, 23); // // Form1 // this->AutoScaleBaseSize = System::Drawing::Size(6, 15); this->ClientSize = System::Drawing::Size(292, 270); this->Controls->Add(this->lbRetString); this->Controls->Add(this->lbRetVal); this->Name = S"Form1"; this->Text = S"Click Form to get dialog"; this->Click += new System::EventHandler(this, Form1_Click); this->ResumeLayout(false); } private: System::Void Form1_Click(System::Object * sender, System::EventArgs * e) { MyDialog *mydialog = new MyDialog(); mydialog->PassedValue = S"This has been passed from Form1"; if (mydialog->ShowDialog() == DialogResult::OK) lbRetVal->Text = S"OK"; else if (mydialog->DialogResult == DialogResult::Abort) lbRetVal->Text = S"Abort"; else lbRetVal->Text = S"Cancel"; lbRetString->Text = mydialog->PassedValue; } }; }
Figure 10-18 shows what the preceding example looks like when you execute it.
Figure 10-18: Calling a custom dialog box
Not much of a change, is there? First, you create an instance of the dialog box:
MyDialog *mydialog = new MyDialog();
Optionally, you can pass all the data you want to the dialog box:
mydialog->PassedValue = S"This has been passed from Form1";
Then you call the dialog box in one of two ways:
ShowDialog()
Show()
The first mode, ShowDialog(), is modal. In this mode, you wait for the dialog box to finish before you continue processing. Normally, you would check the DialogResult upon exit, as you do in the example, but that is not necessary:
if (mydialog->ShowDialog() == DialogResult::OK)
    lbRetVal->Text = S"OK";
else if (mydialog->DialogResult == DialogResult::Abort)
    lbRetVal->Text = S"Abort";
else
    lbRetVal->Text = S"Cancel";
The second mode, Show(), is modeless. In this mode, the dialog box opens and then returns control immediately back to its caller. You now have two threads of execution running. I cover threads in Chapter 16 and discuss modeless dialog boxes in more detail there, but here is the code to start a modeless dialog box:
mydialog->Show();
The final thing you might do (again, this is optional) is grab the changed data out of the dialog box:
lbRetString->Text = mydialog->PassedValue;
By the way, I have been using Strings to pass data back and forth between the dialog box and the main application. This is not a restriction, though—you can use any data type you want.
When you've worked with Windows for any length of time, you soon come to recognize some common dialog boxes that many applications use. The .NET Framework class library provides you easy access to using these same dialog boxes in your programs. Table 10-3 shows a list of the available common dialog boxes.
You call the common dialog boxes in the same way you do the custom dialog box you just built. Listing 10-16 shows just how simple it is to call the ColorDialog. Calling all the other custom dialog boxes is done the same way.
Listing 10-16: Calling a Common ColorDialog
namespace ColorDlg { using namespace System; using namespace System::ComponentModel; using namespace System::Collections; using namespace System::Windows::Forms; using namespace System::Data; using namespace System::Drawing; public __gc class Form1 : public System::Windows::Forms::Form { public: Form1(void) //... protected: void Dispose(Boolean disposing) //... private: System::ComponentModel::Container * components; void InitializeComponent(void) { this->AutoScaleBaseSize = System::Drawing::Size(6, 15); this->ClientSize = System::Drawing::Size(292, 270); this->Name = S"Form1"; this->Text = S"Common Color Dialog - Click Form"; this->Click += new System::EventHandler(this, Form1_Click); } private: System::Void Form1_Click(System::Object * sender, System::EventArgs * e) { ColorDialog *colordialog = new ColorDialog(); if (colordialog->ShowDialog() == DialogResult::OK) { BackColor = colordialog->Color; } } }; }
There is nothing new or special here. First, check to make sure that the dialog box exited with the DialogResult of OK, and then set the color of the object you want changed with the value in the Color property of the ColorDialog.
Figure 10-19 shows what the example looks like when you execute it.
Figure 10-19: Calling a common ColorDialog
|
https://flylib.com/books/en/2.474.1.75/1/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Automating Your Feature Testing With Selenium WebDriver
This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.
What Is Selenium And How Can It Help You?
Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.
Automating Your Feature Testing With Selenium WebDriver
Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.
I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.
One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout did not work properly since the previous software deployment.
This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.
Preparing The First Test With Selenium WebDriver
First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.
As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.
What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.
If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.
Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.
Downloading The ChromeDriver
All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.
I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.
Setting Up Our Selenium Project In Eclipse
The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:
- Open Eclipse.
- Click the "New" icon.
Creating a new project in Eclipse
- Choose the wizard to create a new "Java Project," and click “Next.”
Choose the java-project wizard.
- Give your project a name, and click "Finish."
The eclipse project wizard
- Now you should see your new Java project on the left side of the screen.
We successfully created a project to run the Selenium WebDriver.
Adding The Selenium Library To Our Project
Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:
- Download the latest version of the Java Selenium library.
Download the Selenium library.
- Extract the archive, and store the folder in a place you can remember easily.
- Go back to Eclipse, and go to "Project" → “Properties.”
Go to properties to integrate the Selenium WebDriver in you project.
- In the dialog, go to "Java Build Path" and then to register “Libraries.”
- Click on "Add External JARs."
Add the Selenium lib to your Java build path.
- Navigate to the just downloaded folder with the Selenium library. Highlight all .jar files and click "Open."
Select all files of the lib to add to your project.
- Repeat this for all .jar files in the subfolder libs as well.
- Eventually, you should see all .jar files in the libraries of your project:
The Selenium WebDriver framework has now been successfully integrated into your project!
That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?
Creating Our Testing Class And Letting It Open the Chrome Browser
Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.
To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.
Try it yourself:
- Click on the "New" button again (while you are in your new project’s folder).
Create a new class to run the Selenium WebDriver.
- Choose the "Class" wizard, and click “Next.”
Choose the Java class wizard to create a new class.
- Name your class (for example, "RunTest"), and click “Finish.”
The eclipse Java Class wizard.
- Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer; the closing lines are shown here, and a complete version of the class is sketched just after this list:
// Waiting a bit before closing
Thread.sleep(7000);
// Closing the browser and WebDriver
webDriver.close();
webDriver.quit();
}
}
- Save your file, and click on the play button to run your code.
Running your first Selenium WebDriver project.
- If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
The Chrome Browser opens itself magically. (Large preview)
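For reference, here is a sketch of what the complete class from the step above looks like, assembled from the surviving fragment and the setup lines discussed later in this article; the chromedriver path is a placeholder you must change:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
public class RunTest {
    static WebDriver webDriver;
    public static void main(String[] args) throws InterruptedException {
        // Telling the system where to find the ChromeDriver
        System.setProperty("webdriver.chrome.driver", "C:/PATH/TO/chromedriver.exe");
        // Open a new instance of the Chrome browser
        webDriver = new ChromeDriver();
        // Waiting a bit before closing
        Thread.sleep(7000);
        // Closing the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }
}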
Testing The WordPress Admin Login
Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, login to the admin area and verify that the login was successful. So, what commands did I need to look up?
- Navigate to the login form,
- Locate the input fields,
- Type the username and password into the input fields,
- Hit the login button,
- Compare the current page’s headline to see if the login was successful.
Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!
If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:
The location of your chromedriver.exe file (as above),
The URL of the WordPress admin account that you want to test,
The WordPress username,
The WordPress password.
Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
public class RunTest {
    static WebDriver webDriver;
    public static void main(String[] args) {
        // Telling the system where to find the ChromeDriver
        System.setProperty("webdriver.chrome.driver", "C:/PATH/TO/chromedriver.exe");
        // Open the Chrome browser
        webDriver = new ChromeDriver();
        // Maximize the browser window
        webDriver.manage().window().maximize();
        if (testWordpresslogin()) {
            System.out.println("Test Wordpress Login: Passed");
        } else {
            System.out.println("Test Wordpress Login: Failed");
        }
        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }
    private static boolean testWordpresslogin() {
        try {
            // Open the WordPress admin login page (insert your own URL here)
            webDriver.navigate().to("");
            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");
            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();
            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);
            // Check whether the h1 equals "Dashboard"
            if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) {
                return true;
            } else {
                return false;
            }
        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}
If you have done everything correctly, your output in the Eclipse console should look something like this:
Understanding The Code
Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpressLogin, for the specific test case that is called from our main method.
Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.
This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.
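For instance, a second check could sit next to testWordpresslogin in the same style. The method below is a hypothetical example (not from the article) that uses the same webDriver object to verify that the page title of the admin area contains the word "Dashboard" after logging in:
private static boolean testDashboardTitle() {
    try {
        // The WordPress admin dashboard title normally contains "Dashboard"
        return webDriver.getTitle().contains("Dashboard");
    } catch (final Exception e) {
        System.out.println(e.getClass().toString());
        return false;
    }
}
You would call it from the main method just like testWordpresslogin() and print a passed/failed line for it.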
Now, step by step, here is what happens in our little program:
- First, we tell our program where it can find the specific WebDriver for Chrome.
System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
- We open the Chrome browser and maximize the browser window.
webDriver = new ChromeDriver(); webDriver.manage().window().maximize();
- This is where we jump into our submethod and check whether it returns true or false.
if (testWordpresslogin()) …
- The following part in our submethod might not be intuitive to understand: the try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, then the execution continues in catch{…}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch{…}. In my example, the test will be marked as "failed" whenever something goes wrong and the catch{…} block is executed.
- In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
webDriver.navigate().to(""); webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME"); webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");
Selenium fills out our login form
- After filling in the login form, we locate the submit button by its ID and click it.
webDriver.findElement(By.id("wp-submit")).click();
- In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
Thread.sleep(7000);
- If the login is successful, the h1 headline of the current page should now be "Dashboard," referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string "Dashboard." If the login is not successful, we would not find "Dashboard" as the current h1. Therefore, I've decided to use the h1 to check whether the login is successful.
if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) { return true; } else { return false; }
Letting the WebDriver check, whether we have arrived on the Dashboard: Test passed! (Large preview)
- If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
catch (final Exception e) { System.out.println(e.getClass().toString()); return false; }
Adapting The Test Case
This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.
First, we maximize the window:
webDriver.manage().window().maximize();
Then, in a separate method, we navigate to our WordPress admin area:
webDriver.navigate().to("");
There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:
webDriver.findElement(By. …)
The findElement method helps us find different elements in the DOM. There are different options to find elements:
By.id
By.cssSelector
By.className
By.linkText
By.name
By.xpath
If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, the xPath).
Note: You can read more about the different options for locating elements with WebDriver over here.
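As a small illustration of some of the other locator strategies, the calls might look like this (the selector values are hypothetical and not taken from the WordPress form):
// By CSS selector: the first submit input inside a form
webDriver.findElement(By.cssSelector("form input[type='submit']"));
// By the visible text of a link
webDriver.findElement(By.linkText("Lost your password?"));
// By XPath: brittle if the DOM structure changes
webDriver.findElement(By.xpath("//div[@id='login']//input[1]"));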
As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.
We’re using sendKeys to fill in the login form:
webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
We have used click to submit the login form by clicking on the submit button:
webDriver.findElement(By.id("wp-submit")).click();
And getText has been used to check what text is in the h1 after the submit button is clicked:
webDriver.findElement(By.tagName("h1")).getText()
Note: Be sure to check out all the available methods that you can use with an element.
Conclusion
Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.
If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).
I look forward to hearing whether you find WebDriver as useful as I do!
|
https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
#include <remote_access.h>
Detailed Description
Wrapper for the org_kde_kwin_remote_access_manager interface.
This class provides a convenient wrapper for the org_kde_kwin_remote_access_manager interface.
To use this class one needs to interact with the Registry. There are two possible ways to create the RemoteAccessManager interface:
This creates the RemoteAccessManager and sets it up directly. As an alternative this can also be done in a more low level way:
The RemoteAccessManager can be used as a drop-in replacement for any org_kde_kwin_remote_access_manager pointer as it provides matching cast operators.
Definition at line 62 of file remote_access.h.
Constructor & Destructor Documentation
Creates a new RemoteAccessManager.
Note: after constructing the RemoteAccessManager it is not yet valid and one needs to call setup. In order to get a ready to use RemoteAccessManager prefer using Registry::createRemoteAccessManager.
Definition at line 78 of file remote_access.cpp.
Member Function Documentation
Destroys the data held by this RemoteAccessManager, so that the instance can be set up with a new org_kde_kwin_remote_access_manager interface once there is a new connection available.
It is suggested to connect this method to ConnectionThread::connectionDied:
Definition at line 99 of file remote_access.cpp.
- Returns
- The event queue to use for creating objects with this RemoteAccessManager.
Definition at line 109 of file remote_access.cpp.
- Returns true if managing a org_kde_kwin_remote_access_manager.
Definition at line 122 of file remote_access.cpp.
Releases the org_kde_kwin_remote_access_manager interface.
After the interface has been released the RemoteAccessManager instance is no longer valid and can be setup with another org_kde_kwin_remote_access_manager interface.
Definition at line 94 of file remote_access.cpp.
The corresponding global for this interface on the Registry got removed.
This signal gets only emitted if the RemoteAccessManager got created by Registry::createRemoteAccessManager
Sets the queue to use for creating objects with this RemoteAccessManager.
Definition at line 104 of file remote_access.cpp.
Setup this RemoteAccessManager to manage the remoteaccessmanager.
When using Registry::createRemoteAccessManager there is no need to call this method.
Definition at line 89 of file remote_access.cpp.
|
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1RemoteAccessManager.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Decoupled Communication with Prism (Event Aggregation)
In my previous post, I discussed commanding and how you can hook up invokers (e.g. buttons and menu items), global CompositeCommands, and module level command instances. If you need the sending of a message to be initiated by a user gesture such as a button click, Commands may be a good choice. If you need the sending of a message to be initiated by business logic code, such as in a controller or presenter, you should consider using the EventAggregator.
In the case where a button has been clicked, validation has passed, and work has been completed, it is often useful to alert the rest of the application that this event has occurred. Let's say the "Process Order" button has been pressed. If the order is successfully processed, other modules may want to know so that they can update their views: "Processed Orders List", "Pending Orders List", "Available Inventory"... For this type of communication, we developed EventAggregator.
EventAggregator provides multicast publish/subscribe functionality. There can be multiple publishers that publish/fire/raise the same event and there can be many subscribers who listen to the same event. If you define the event in a common assembly, EventAggregator can be used across module assemblies.
Defining an event:
Defining an EventAggregator event is as simple as extending WpfEvent<TPayload> supplying the payload type.
public class TickerSymbolSelectedEvent : WpfEvent<string> { }
For the TickerSymbolSelectedEvent, publishers will need to provide a string argument, and subscribers can expect to receive a string argument.
EventAggregator Singleton Service:
Make sure that the EventAggregator instance is treated as a singleton. One way to do this is to register this type in your dependency injection container as a single instance and always use the container to resolve the EventAggregator.
unityContainer.RegisterType<IEventAggregator, EventAggregator>(new ContainerControlledLifetimeManager());
The EventAggregator is merely a singleton factory. When you call its GetEvent<TEventType> method, it will return an instance of TEventType that was previously requested, or create a new instance and keep track of it.
Publishing an event:
Use the singleton instance of the EventAggregator to get an instance of the event that you want to work with. Call the event's Publish method, supplying the message payload. The WpfEvent<TPayload> base class manages all the subscriptions and handles cross-thread messaging.
eventAggregator.GetEvent<TickerSymbolSelectedEvent>().Publish("MSFT");
Subscribing to an event:
Again, use the singleton instance of EventAggregator to get an instance of the event that you want to work with. Call the event's Subscribe method, supplying a delegate to handle the message when it is received. You can optionally provide a ThreadOption to specify the thread on which you would like to receive the message (the publisher's thread, the UI thread, or a background thread). You can optionally provide a boolean to specify whether or not to keep a strong reference to the subscriber; this defaults to false so that the subscriber can be garbage collected even if it never unregistered from the EventAggregator event. Finally, you can optionally provide a predicate delegate to filter the messages based on your custom criteria. Please note that you can only filter on properties of the message payload.
eventAggregator.GetEvent<TickerSymbolSelectedEvent>().Subscribe(ShowNews, ThreadOption.UIThread);
|
https://docs.microsoft.com/en-us/archive/blogs/francischeung/decoupled-communication-with-prism-event-aggregation
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
tag:gamedevelopment.tutsplus.com,2005:/categories/steering-behaviors Envato Tuts+ Game Development - Steering Behaviors 2015-02-18T16:01:02Z tag:gamedevelopment.tutsplus.com,2005:PostPresenter/cms-23026 Create a Hockey Game AI Using Steering Behaviors: Game Mechanics <p>In <a href="" target="_self">past posts in this series</a>, we've focused on the <em>concepts</em>.<br></p><h2>Final Result</h2><p>Below is the game that will be implemented using all the elements described in this tutorial.</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Thinking Game Design</h2><p>The <a href="" rel="external">previous parts of this series</a> focused on explaining how the game AI works. Each part detailed a particular aspect of the game, like <a href="" rel="external">how athletes move</a> and how <a href="" rel="external">attack</a> and <a href="" rel="external">defense</a> are implemented. They were based on concepts like <a href="" rel="external">steering behaviors</a> and <a href="" rel="external">stack-based finite state machines</a>.</p><p>In order to make a fully playable game, however, all those aspects must be wrapped into a core <em>game mechanic</em>. The most obvious choice would be to implement all the official rules of an official hockey match, but that would require a lot of work and time. Let's take a simpler fantasy approach instead.</p><p.</p><p>In order to enhance this mechanic, we'll add a few power-ups. They will help the player to score and make the game a bit more dynamic.</p><h2>Adding the Ability to Score</h2><p>Let's begin with the scoring system, responsible for determining who wins or loses. A team scores every time the puck enters the opponent's goal.</p><p>The easiest way to implement this is by using two overlapped rectangles:</p><figure class="post_image"><img alt="Overlapped rectangles describing the goal area If the puck collides with the red rectangle the team scores" data-<figcaption>Overlapped rectangles describing the goal area. If the puck collides with the red rectangle, the team scores.</figcaption></figure><p>The green rectangle represents the area occupied by the goal structure (the frame and the net). It works like a solid block, so the puck and the athletes will not be able to move through it; they will bounce back.</p><p>The red rectangle represents the "score area". If the puck overlaps this rectangle, it means a team just scored.<br></p><p>The red rectangle is smaller than the green one, and placed in front of it, so if the puck touches the goal on any side but the front, it will bounce back and no score will be added:</p><figure class="post_image"><img alt="A few examples of how the puck would behave if it touched the rectangles while moving" data-<figcaption>A few examples of how the puck would behave if it touched the rectangles while moving.</figcaption></figure><h2>Organizing Everything After Someone Scores</h2><p>After a team scores, all athletes must return to their initial position and the puck must be placed at the rink center again. 
After this process, the match can continue.</p><h3>Moving Athletes To Their Initial Position</h3><p>As explained in the <a href="" rel="external">first part</a> of this series, all athletes have an AI state called <code class="inline">prepareForMatch</code> that will move them towards the initial position, and cause them to smoothly come to a stop there.</p><p>When the puck overlaps one of the "score areas", any currently active AI state of all athlete is removed and <code class="inline">prepareForMatch</code> is pushed into the brain. Wherever the athletes are, they will return to their initial position><h3>Moving the Camera Towards the Rink Center</h3><p>Since the camera always follows the puck, if it is directly teleported to the rink center after someone scores, the current view will abruptly change, which would be ugly and confusing.</p><p>A better way to do this is to move the puck smoothly towards the rink center; since the camera follows the puck, this will gracefully slide the view from the goal to the rink center. </p><p>This can be achieved by changing the puck's velocity vector after it hits any goal area. The new velocity vector must "push" the puck towards the rink center, so it can be calculated as:</p><pre class="brush: actionscript3 noskimlinks noskimwords">var c :Vector3D = getRinkCenter(); var p :Vector3D = puck.position; var v :Vector3D = c - p; v = normalize(v) * 100; puck.velocity = v;</pre><p>By subtracting the rink center's position from the puck's current position, it is possible to calculate a vector that points directly towards the rink center.</p><p>After normalizing this vector, it can be scaled by any value, like <code class="inline">100</code>, which controls how fast the puck moves towards the rink center.</p><p>Below is an image with a representation of the new velocity vector:</p><figure class="post_image"><img alt="Calculation of a new velocity vector that will move the puck towards the rink center" data-<figcaption>Calculation of a new velocity vector that will move the puck towards the rink center.</figcaption></figure><p>This vector <code class="inline">V</code> is used as the puck's velocity vector, so the puck will move towards the rink center as intended.</p><p.</p><p>In order to decide whether the puck is already in position, the distance between it and the rink center is calculated during the movement. If it is less than <code class="inline">10</code>, for instance, the puck is close enough to be directly placed at the rink center and reactivated so that the match can continue.</p><h2>Adding Power-Ups</h2><p>The idea behind power-ups is to help the player achieve the game's primary objective, which is to score by carrying the puck to the opponent's goal.</p><p>For the sake of scope, our game will have only two power-ups: <em>Ghost Help</em> and <em>Fear The Puck</em>. The former adds three additional athletes to the player's team for some time, while the latter makes the opponents flee the puck for a few seconds.</p><p>Power-ups are added to both teams when anyone scores.</p><h3>Implementing the "Ghost Help" Power-up</h3><p>Since all athletes added by the <em>Ghost Help</em> power-up are temporary, the <code class="inline">Athlete</code> class must be modified to allow an athlete to be marked as a "ghost". 
If an athlete is a ghost, it will remove itself from the game><p>Below is the <code class="inline">Athlete</code> class, highlighting only the additions made to accommodate the ghost functionality:</p><pre class="brush: actionscript3 noskimlinks noskimwords". } }</pre><p>The property <code class="inline">mGhost</code> is a boolean that tells if the athlete is a ghost or not, while <code class="inline">mGhostCounter</code> contains the amount of seconds the athlete should wait before removing himself from the game.</p><p>Those two properties are used by the <code class="inline">updatePowerups()</code> method:</p><pre class="brush: actionscript3 noskimlinks noskimwords"(); } } }</pre><p>The <code class="inline">updatePowerups()</code> method, called within the athlete's <code class="inline">update()</code> routine, will handle all power-up processing in the athlete. Right now all it does is check whether the current athlete is a ghost or not. If it is, then the <code class="inline">mGhostCounter</code> property is decremented by the amount of time elapsed since the last update.</p><p>When the value of <code class="inline">mGhostCounter</code> reaches zero, it means that the temporary athlete has been active for long enough, so it must remove itself from the game. To make the player aware of that, the athlete will start flickering its last two seconds before disappearing.</p><p>Finally, it is time to implement the process of adding the temporary athletes when the power-up is activated. That is performed in the <code class="inline">powerupGhostHelp()</code> method, available in the main game logic:</p><pre class="brush: actionscript3 noskimlinks noskimwords"); } }</pre><p>This method iterates over a loop that corresponds to the amount of temporary athletes being added. Each new athlete is added to the bottom of the rink and marked as a ghost. </p><p>As previously described, ghost athletes will remove themselves from the game.</p><h3>Implementing the "Fear The Puck" Power-Up</h3><figure data- <iframe src="//" frameborder="0" webkitallowfullscreen="webkitallowfullscreen" mozallowfullscreen="mozallowfullscreen" allowfullscreen="allowfullscreen"></iframe> </figure><p>The <em>Fear The Puck</em> power-up makes all opponents flee the puck for a few seconds. </p><p>Just like the <em>Ghost Help</em> power-up, the <code class="inline">Athlete</code> class must be modified to accommodate that functionality:</p><pre class="brush: actionscript3 noskimlinks noskimwords" } }</pre><p>First the <code class="inline">updatePowerups()</code> method is changed to decrement the <code class="inline">mFearCounter</code> property, which contains the amount of time the athlete should avoid the puck. The <code class="inline">mFearCounter</code> property is changed every time the method <code class="inline">fearPuck()</code> is called.</p><p>In the <code class="inline">Athlete</code>'s <code class="inline">update()</code> method, a test is added to check if the power-up should take place. 
If the athlete is an opponent controlled by the AI (<code class="inline">amIAnAiControlledOpponent()</code> returns <code class="inline">true</code>) and the athlete should evade the puck (<code class="inline">shouldIEvadeFromPuck()</code> returns <code class="inline">true</code> as well), the <code class="inline">evadeFromPuck()</code> method is invoked.</p><p>The <code class="inline">evadeFromPuck()</code> method uses the <a href="" rel="external">evade behavior</a>, which makes an entity steer away from another object's predicted position:</p><pre class="brush: actionscript3 noskimlinks noskimwords">private function evadeFromPuck() :void { mBoid.steering = mBoid.steering + mBoid.evade(getPuck().getBoid()); }</pre><p>All the <code class="inline">evadeFromPuck()</code> method does is add an evade force to the current athlete's steering force. It makes him evade the puck without ignoring the already added steering forces, such as the one created by the currently active AI state.</p><p>In order to be evadable, the puck must behave like a boid, as all athletes do (more information about that in the <a href="" rel="external">first part of the series</a>). As a consequence, a boid property, which contains the puck's current position and velocity, must be added to the <code class="inline">Puck</code> class:</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Puck { // (...) private var mBoid :Boid; // (...) public function update() { // (...) mBoid.update(); } public function getBoid() :Boid { return mBoid; } // (...) }</pre><p>Finally, we update the main game logic to make opponents fear the puck when the power-up is activated:</p><pre class="brush: actionscript3 noskimlinks noskimwords">private function powerupFearPuck() :void { var i :uint, athletes :Array = rightTeam.members, size :uint = athletes.length; for (i = 0; i < size; i++) { if (athletes[i] != null) { // Make athlete fear the puck for 3 seconds. athletes[i].fearPuck(3); } } }</pre><p>The method iterates over all opponent athletes (the right team, in this case), calling the <code class="inline">fearPuck()</code> method of each one of them. This will trigger the logic that makes the athletes fear the puck for a few seconds, as previously explained.</p><h2>Freezing and Shattering</h2><p>The last addition to the game is the freezing and shattering part. It is performed in the main game logic, where a routine checks whether the athletes of the left team are overlapping with the athletes of the right team.</p><p>This overlapping check is automatically performed by the <a href="" rel="external">Flixel</a> game engine, which invokes a callback every time an overlap is found:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// (...)</pre><p>This callback receives as parameters the athletes of each team that overlapped. A test checks if the puck's owner is not null, which means it is being carried by someone.</p><p>In that case, the puck's owner is compared to the athletes that just overlapped. If one of them is carrying the puck (so he is the puck's owner), he is shattered and the puck's ownership passes to the other athlete.</p><p>The <code class="inline">shatter()</code> method in the <code class="inline">Athlete</code> class will mark the athlete as inactive and place it at the bottom of the rink after a few seconds. 
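<p>A rough sketch of that callback and of <code class="inline">shatter()</code> is shown below. The callback signature, the <code class="inline">mPuck</code> reference, and the <code class="inline">mActive</code> flag are assumptions used for illustration; only the overall flow comes from the description above:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// Sketch only: invoked by the engine when a left-team athlete
// overlaps a right-team athlete.
private function athletesOverlapped(theLeftAthlete :Athlete, theRightAthlete :Athlete) :void {
    var aPuckOwner :Athlete = mPuck.owner;

    // Only interesting if someone is actually carrying the puck.
    if (aPuckOwner != null) {
        if (aPuckOwner == theLeftAthlete) {
            // The carrier was touched: shatter him and hand the puck
            // over to the athlete that touched him.
            theLeftAthlete.shatter();
            mPuck.setOwner(theRightAthlete);
        } else if (aPuckOwner == theRightAthlete) {
            theRightAthlete.shatter();
            mPuck.setOwner(theLeftAthlete);
        }
    }
}

// In the Athlete class:
public function shatter() :void {
    // Assumed flag: an inactive athlete stops moving and ignores the puck.
    mActive = false;

    // After a few seconds the game logic places the athlete back
    // at the bottom of the rink and re-activates him.
}</pre>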
It will also emit several particles representing ice pieces, but this topic will be covered in another post.</p><h2>Conclusion</h2><p>In this tutorial, we implemented a few elements required to turn our hockey prototype into a fully playable game. I intentionally placed the focus on the concepts behind each of those elements, instead of how to actually implement them in game engine X or Y.</p><p>The freeze and shatter approach used for the game might sound too fantastical, but it helps keep the project manageable. Sports rules are very specific, and their implementation can be tricky.</p><p>By adding a few screens and some HUD elements, you can create your own full hockey game from this demo!</p><h2>References</h2><ul> <li>Rink: <a href="">Hockey Stadium</a> on GraphicRiver</li> <li>Sprites: <a href="">Hockey Players</a> by Taylor J Glidden</li> <li>Icons: <a href="" rel="external">Game-Icons</a> by Lorc</li> <li>Mouse cursor: <a href="" rel="external">Cursor</a> by Iwan Gabovitch</li> <li>Instruction keys: <a href="" rel="external">Keyboard Pack</a> by Nicolae Berbece</li> <li>Crosshair: <a href="" rel="external">Crosshairs Pack</a> by Bryan</li> <li>SFX/Music: <a href="" rel="external">shatter</a> by Michel Baradari, <a href="" rel="external">puck hit</a> and <a href="" rel="external">cheer</a> by gr8sfx, <a href="" rel="external">music</a> by DanoSongs.com</li> </ul> 2015-02-18T16:01:02.674Z Fernando Bevilacqua Create a Hockey Game AI Using Steering Behaviors: Defense <p>This tutorial is the final part in the process of <a href="" target="_self">coding a hockey game using steering behaviors and finite state machines</a>. Here, we will improve our athletes' artificial intelligence to allow them to defend their goal against their opponents. We'll also make our athletes perform some attack tactics while they are defending, so they can recover the puck and terminate the opponent's offensive.<br></p><h2>A Few Words About Defending</h2><p>In a competitive game like hockey, the defense process is much more than just rushing to the team's goal area to prevent the opponent from scoring. Preventing the opponent from scoring is just one of the many tasks involved.</p><p>If a team focuses on score prevention tactics alone, all athletes will become merely obstacles along the way. The opponent will keep pushing, trying to find a gap in the defense formation. It will take time, but eventually the opponent will score.</p><p>The defense process is a mixture of defensive and offensive actions. The best way to terminate the opponent's attack, which is the defense objective, is to <i>attack while defending</i>. It might sound a bit confusing, but it makes perfect sense.</p><h2>Combining Attack and Defense</h2><p>In order to achieve a defensive behavior that has some attack aspects in it, we'll add two new states to the AI finite-state machine:</p><figure class="post_image"><img alt=""><figcaption>A stack-based finite state machine representing the attack and the defense processes.</figcaption></figure><p>The <code class="inline">defend</code> state will be the foundational stone in the defense process. While in that state, athletes will move towards their side of the rink, always trying to recover the puck to terminate the opponent's offensive. </p><p>The <code class="inline">patrol</code> state will complement the defense process. 
It will prevent athletes from standing still when they reach their defense position in the rink. This state will keep athletes moving and patrolling the area, which will produce a more convincing result.</p><h2>Understanding the Defend State</h2><p>The <code class="inline">defend</code> state is based on a very simple idea. When it is active, each athlete will move towards their initial position in the rink. We already used this position, described by the <code class="inline">mInitialPosition</code> property in the <code class="inline">Athlete</code> class, to implement the <code class="inline">prepareForMatch</code> state in <a href="" rel="external" target="_blank">the first tutorial in this series</a>.</p><p>While moving towards his initial position, an athlete will try to perform some attack actions against the opponent if he is close enough and is carrying the puck. For instance, if the athlete is moving and the opponent's leader (the one with the puck) becomes a neighbor, the <code class="inline">defend</code> state will be replaced with something more appropriate, such as the <code class="inline">stealPuck</code> state.</p><p>Since athletes tend to be spread through the whole rink while attacking, when they switch to <code class="inline">defend</code> and start returning to their initial position, they will cover a significant area, ensuring a convincing defense pattern:</p><figure class="post_image"><img alt="" data-<figcaption>Athletes performing attack actions while returning to their initial defense positions.</figcaption></figure><p>Some athletes will not encounter opponents along the way, so they will just move towards their initial position. Other athletes, however, might get close to some interesting opponents, such as the leader (the one carrying the puck).</p><h2>Implementing the Defend State</h2><p>The <code class="inline">defend</code> state will have four transitions:</p><figure class="post_image"><img alt="" data-<figcaption>The defend state and its transitions in the FSM describing the defense process..</figcaption></figure><p>Three of them, <code class="inline">team has the puck</code>, <code class="inline">close to opponent leader</code>, and <code class="inline">puck has no owner</code>, are related to attack actions. They will be responsible for making athletes look like they are attacking opponents while moving to defend the team's goal. The <code class="inline">in position</code> transition will be triggered when the athlete finally arrives at his initial position in the rink.</p><p>The first step in implementing the <code class="inline">defend</code> state is to make the athlete move towards his initial position. Since he must slow down as he gets closer to the destination, the <a href="" rel="external" target="_blank">arrive steering behavior</a> is a perfect fit:</p><pre class="brush: actionscript3 noskimlinks noskimwords"); } } // (...) }</pre><p>The arrive behavior will create a force that will push the athlete towards his initial position (<code class="inline">mInitialPosition</code>) while the <code class="inline">defend</code> state is active. After the arrive force calculation, in this code, we run a sequence of tests that will check the puck's ownership and the proximity of opponents, popping the <code class="inline">defend</code> state from the brain and pushing a new one according to the situation.</p><p>If the puck has no owner, it is probably moving freely in the rink. 
In that case, the <code class="inline">pursuePuck</code> state will be pushed into the brain (line <b>29</b>). If the puck's owner belongs to the athlete's own team, it means the defense process is over and it is time to attack (line <b>16</b>). Finally, if the puck's owner belongs to the opponent team and he is close enough, <code class="inline">stealPuck</code> will be pushed into the brain (line <b>22</b>).</p><p>The result is a team that is able to defend their goal, pursuing and trying to steal the puck from the opponent carrying it. Below is a demonstration of the current defend implementation:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Patrolling the Area</h2><p>The current defense behavior is acceptable, but it can be tweaked a little bit to be more convincing. If you analyze the previous demo, you may notice that athletes stop and stand still after they reach their initial position while defending.</p><p>If an athlete returns to his initial position without encountering any opponents along the way, he will remain still until an opponent with the puck passes by or the team recovers the puck.</p><p>We can improve this behavior by adding a <code class="inline">patrol</code> state, which gets pushed into the brain by the <code class="inline">defend</code> state when the athlete reaches his initial position:</p><figure class="post_image"><img alt=""><figcaption>The patrol state and its transitions in the FSM describing the defense process.</figcaption></figure><p>The <code class="inline">patrol</code> state is extremely simple. When active, it will make athletes move around randomly for a short time, which visually mimics the expected behavior of an athlete trying to defend a spot in the rink.</p><p>When the distance between the athlete and his initial position is greater than <code class="inline">10</code>, for instance, <code class="inline">patrol</code> pops itself from the brain and pushes <code class="inline">defend</code>. If the athlete arrives at his initial position again while defending, <code class="inline">patrol</code> is pushed once more into the brain and the process repeats:</p><figure class="post_image"><img alt=""><figcaption>Demonstration of the patrol state.</figcaption></figure><p>The random movement pattern required by the <code class="inline">patrol</code> state can be easily achieved with the <a href="" rel="external" target="_blank">wander steering behavior</a>. The implementation of the <code class="inline">patrol</code> state is:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// (...)</pre><p>The distance check (line <b>8</b>) ensures that the athlete will patrol a small area around his initial position instead of leaving his initial defense position completely unattended.</p><p>The result of using the <code class="inline">patrol</code> state is a more convincing behavior:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Putting It All Together</h2><p>During the implementation of the <code class="inline">stealPuck</code> state in the <a href="" rel="external" target="_blank">previous tutorial</a>, there was a situation where athletes should switch to the <code class="inline">defend</code> state. 
However, that state was not implemented back then.</p><p>While trying to steal the puck (the <code class="inline">stealPuck</code> state), if the opponent is too far away from the athlete, it's pointless to keep trying to steal the puck. The best option in that situation is to pop the <code class="inline">stealPuck</code> state and push <code class="inline">defend</code>, hoping that a teammate will be closer to the opponent's leader to steal the puck.</p><p>The <code class="inline">stealPuck</code> state must be changed to allow athletes to push the <code class="inline">defend</code> state in that situation (see the <code class="inline">popState()</code>/<code class="inline">pushState(defend)</code> pair near the end of the listing below):</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Athlete {
    // (...)
    private function stealPuck() :void {
        // Does the puck have any owner?
        if (getPuckOwner() != null) {
            // Yeah, it has, but who has it?
            if (doesMyTeamHaveThePuck()) {
                // My team has the puck, so it's time to attack.
                // (...)
            } else {
                // The opponent team has the puck. Is its carrier close to me?
                var aOpponentLeader :Athlete = getPuckOwner();
                if (Utils.distance(aOpponentLeader, this) < 150) {
                    // Yeah, he is close! Let's pursue him while maintaining a certain
                    // separation from the others.
                    mBoid.steering = mBoid.steering.add(mBoid.pursuit(aOpponentLeader.boid));
                    mBoid.steering = mBoid.steering.add(mBoid.separation(50));
                } else {
                    // No, he is too far away. Let's switch to 'defend' and hope
                    // someone closer to the puck can steal it for us.
                    mBrain.popState();
                    mBrain.pushState(defend);
                }
            }
        }
        // (...)
    }
    // (...)
}</pre><p>After updating the <code class="inline">stealPuck</code> state, athletes can organize attack and defense tactics, allowing two AI-controlled teams to play against each other.</p><p>The result is demonstrated below:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Conclusion</h2><p>In this tutorial, we implemented a defense tactic used by athletes to defend their goal from opponents. We then improved the <code class="inline">defend</code> state by adding some attack actions, such as attempting to steal the puck from the opponent carrying it, which made the defense tactic feel more natural and convincing.</p><p>We also improved the feel of the defense behavior by adding an extremely simple yet powerful state, the <code class="inline">patrol</code>. The idea is to prevent athletes from standing still while defending their team's goal.</p><p>And with that, we've created a full AI system for our hockey game!</p><h2>References</h2><ul> <li>Sprite: <a href="">Hockey Stadium</a> on GraphicRiver</li> <li>Sprites: <a href="">Hockey Players</a> by Taylor J Glidden</li> </ul> 2014-08-29T13:30:46.000Z Fernando Bevilacqua Create a Hockey Game AI Using Steering Behaviors: Attack <p>In this tutorial, we continue <a href="" target="_self">coding artificial intelligence for a hockey game using steering behaviors and finite state machines</a>. In this part of the series, you will learn about the AI required by game entities to coordinate an attack, which involves intercepting and carrying the puck to the opponent's goal.<br></p><h2>A Few Words About Attacking</h2><p>Coordinating and performing an attack in a cooperative sport game is a very complex task. In the real world, when humans play a hockey game, they make <i>several</i> decisions based on many variables.<br></p><p>Those decisions involve calculations and understanding what is going on. A human can tell why an opponent is moving based on the actions of another opponent, for instance: "he is moving to be in a better strategic position." It's not trivial to port that understanding to a computer.</p><p>As a consequence, if we try to code the AI to follow all the human nuances and perceptions, the result will be a huge and scary pile of code. 
Additionally, the result might not be precise or easily modifiable.</p><p>That's the reason why our attack AI will try to mimic the <i>result </i>of a group of humans playing, not the human perception itself. That approach will lead to approximations, but the code will be easier to understand and tweak. The outcome is good enough for several use cases.</p><h2>Organizing the Attack With States</h2><p>We'll break the attack process down into smaller pieces, with each one performing a very specific action. Those pieces are the states of a <a href="" target="_self">stack-based finite state machine</a>. As <a href="" rel="external" target="_blank">previously explained</a>, each state will produce a steering force that will make the athlete behave accordingly.</p><p>The orchestration of those states and the conditions to switch among them will define the attack. The image below presents the complete FSM used in the process:</p><figure class="post_image"><img alt="" data-<figcaption>A stack-based finite state machine representing the attack process.</figcaption></figure><p>As illustrated by the image, the conditions to switch among the states will be solely based on the puck's distance and ownership. For instance, <code class="inline">team has the puck</code><i> </i>or <code class="inline">puck is too far away</code><i>.</i><br></p><p>The attack process will be composed of four states: <code class="inline">idle</code>, <code class="inline">attack</code>, <code class="inline">stealPuck</code>, and <code class="inline">pursuePuck</code>. The <code class="inline">idle</code> state was already implemented in the <a href="" rel="external" target="_blank">previous tutorial</a>, and it is the starting point of the process. From there, an athlete will switch to <code class="inline">attack</code> if the team has the puck, to <code class="inline">stealPuck</code> if the opponent's team has the puck, or to <code class="inline">pursuePuck</code> if the puck has no owner and it is close enough to be collected.</p><p>The <code class="inline">attack</code> state represents an offensive movement. While in that state, the athlete carrying the puck (named <code class="inline">leader</code>) will try to reach the opponent's goal. Teammates will move along, trying to support the action.</p><p>The <code class="inline">stealPuck</code> state represents something between a defensive and an offensive movement. While in that state, an athlete will focus on pursuing the opponent carrying the puck. The objective is to recover the puck, so the team can start attacking again.</p><p>Finally, the <code class="inline">pursuePuck</code> state is not related to attack or defense; it will just guide the athletes when the puck has no owner. While in that state, an athlete will try to get the puck that is freely moving on the rink (for instance, after being hit by someone's stick).</p><h2>Updating the Idle State</h2><p>The <code class="inline">idle</code> state that was previously implemented had no transitions. Since this state is the starting point for the whole AI, let's update it and make it able to switch to other states.</p><p>The <code class="inline">idle</code> state has three transitions:</p><figure class="post_image"><img alt="" data-<figcaption> The idle state and its transitions in the FSM describing the attack process.</figcaption></figure><p>If the athlete's team has the puck, <code class="inline">idle</code> should be popped from the brain and <code class="inline">attack</code> should be pushed. 
Similarly, if the opponent's team has the puck, <code class="inline">idle</code> should be replaced by <code class="inline">stealPuck</code>. The remaining transition happens when nobody owns the puck and it is close to the athlete; in that case, <code class="inline">pursuePuck</code> should be pushed into the brain.</p><p>The updated version of <code class="inline">idle</code> is as follows (all other states will be implemented later):</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Athlete { // (...) private function idle() :void { var aPuck :Puck = getPuck(); stopAndlookAt(aPuck); // This is a hack to help test the AI. if (mStandStill) return; // Does the puck has an owner? if (getPuckOwner() != null) { // Yeah, it has. mBrain.popState(); if (doesMyTeamHaveThePuck()) { // My team just got the puck, it's attack time! mBrain.pushState(attack); } else { // The opponent team got the puck, let's try to steal it. mBrain.pushState(stealPuck); } } else if (distance(this, aPuck) < 150) { // The puck has no owner and it is nearby. Let's pursue it. mBrain.popState(); mBrain.pushState(pursuePuck); } } private function attack() :void { } private function stealPuck() :void { } private function pursuePuck() :void { } }</pre><p>Let's proceed with the implementation of the other states.</p><h2>Pursuing the Puck</h2><p>Now that the athlete has gained some perception about the environment and is able to switch from <code class="inline">idle</code> to any state, let's focus on pursuing the puck when it has no owner.</p><p>An athlete will switch to <code class="inline">pursuePuck</code> immediately after the match begins, because the puck will be placed at the center of the rink with no owner. The <code class="inline">pursuePuck</code> state has three transitions:</p><figure class="post_image"><img alt="" data-<figcaption>The pursuePuck state and its transitions in the FSM describing the attack process.</figcaption></figure><p>The first transition is <code class="inline">puck is too far away</code>, and it tries to simulate what happens in a real game regarding chasing the puck. For strategic reasons, usually the athlete closest to the puck is the one that tries to catch it, while the others wait or try to help.</p><p>Without switching to <code class="inline">idle</code> when the puck is distant, every AI-controlled athlete would pursue the puck at the same time, even if they are away from it. By checking the distance between the athlete and the puck, <code class="inline">pursuePuck</code> pops itself from the brain and pushes <code class="inline">idle</code> when the puck is too distant, which means the athlete just "gave up" pursuing the puck:</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Athlete { // (...) private function pursuePuck() :void { var aPuck :Puck = getPuck();. } } // (...) }</pre><p>When the puck is close, the athlete must go after it, which can be easily achieved with the <a href="" rel="external" target="_blank">seek behavior</a>. Using the puck's position as the seek destination, the athlete will gracefully pursue the puck and adjust his trajectory as the puck moves:</p><pre class="brush: noskimlinks noskimwords">class Athlete { // (...) private function pursuePuck() :void { var aPuck :Puck = getPuck(); mBoid.steering = mBoid.steering + mBoid.separation();. if (aPuck.owner == null) { // Nobody has the puck, it's our chance to seek and get it! mBoid.steering = mBoid.steering + mBoid.seek(aPuck.position); } else { // Someone just got the puck. 
If the new puck owner belongs to my team, // we should switch to 'attack', otherwise I should switch to 'stealPuck' // and try to get the puck back. mBrain.popState(); mBrain.pushState(doesMyTeamHaveThePuck() ? attack : stealPuck); } } } }</pre><p>The remaining two transitions in the <code class="inline">pursuePuck</code> state, <code class="inline">team has the puck</code> and <code class="inline">opponent has the puck</code>, are related to the puck being caught during the pursuit. If somebody catches the puck, the athlete must pop the <code class="inline">pursuePuck</code> state and push a new one into the brain. </p><p>The state to be pushed depends on the puck's ownership. If the call to <code class="inline">doesMyTeamHaveThePuck()</code> returns <code class="inline">true</code>, it means that a teammate got the puck, so the athlete must push <code class="inline">attack</code>, which means it's time to stop pursuing the puck and start moving towards the opponent's goal. If an opponent got the puck, the athlete must push <code class="inline">stealPuck</code>, which will make the team try to recover the puck.</p><p>As a small enhancement, athletes should not remain too close to each other during the <code class="inline">pursuePuck</code> state, because a "crowded" pursuing movement is unnatural. Adding <a href="" rel="external" target="_blank">separation</a> to the state's steering force (line <code class="inline">6</code> in the code above) ensures athletes will keep a minimum distance from one another.</p><p>The result is a team that's able to pursue the puck. For the sake of testing, in this demo, the puck is placed at the center of the rink every few seconds, to make the athletes move continually:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Attacking With the Puck</h2><p>After obtaining the puck, an athlete and his team must move towards the opponent's goal to score. That's the purpose of the <code class="inline">attack</code> state:</p><figure class="post_image"><img alt=""><figcaption>The attack state and its transitions in the FSM describing the attack process.</figcaption></figure><p>The <code class="inline">attack</code> state has only two transitions: <code class="inline">opponent has the puck</code> and <code class="inline">puck has no owner</code>. Since the state is solely designed to make athletes move towards the opponent's goal, there is no point in remaining in the attack state if the puck is no longer in the team's possession.</p><p>Regarding the movement towards the opponent's goal: the athlete carrying the puck (leader) and the teammates helping him should behave differently. The leader must reach the opponent's goal, and the teammates should help him along the way.</p><p>This can be implemented by checking whether the athlete running the code has the puck:</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Athlete {
    // (...)
    private function attack() :void {
        var aPuckOwner :Athlete = getPuckOwner();

        if (amIThePuckOwner()) {
            // I have the puck, so let's move towards the opponent's goal.
            mBoid.steering = mBoid.steering + mBoid.seek(getOpponentGoalPosition());
        } else {
            // A teammate has the puck. Let's just follow him
            // to give some support during the attack.
            mBoid.steering = mBoid.steering + mBoid.followLeader(aPuckOwner.boid);
            mBoid.steering = mBoid.steering + mBoid.separation();
        }
    }
    // (...)
}</pre><p>If <code class="inline">amIThePuckOwner()</code> returns <code class="inline">true</code>, the athlete running the code has the puck. In that case, he will just <a href="" rel="external" target="_blank">seek</a> the opponent's goal position. That's pretty much the same logic used to pursue the puck in the <code class="inline">pursuePuck</code> state.</p><p>If <code class="inline">amIThePuckOwner()</code> returns <code class="inline">false</code>, the athlete doesn't have the puck, so he must help the leader. 
Helping the leader is a complicated task, so we will simplify it. An athlete will assist the leader just by seeking a position ahead of him:</p><figure class="post_image"><img alt=""><figcaption>Teammates assisting the leader.</figcaption></figure><p>As the leader moves, he will be surrounded by teammates as they follow the <code class="inline">ahead</code> point. This gives the leader some options to pass the puck to in case of trouble. As in a real game, the surrounding teammates should also stay out of the leader's way.</p><p>This assistance pattern can be achieved by adding a slightly modified version of the <a href="" rel="external" target="_blank">leader following</a> behavior (the <code class="inline">followLeader()</code> call in the listing above). The only difference is that athletes will follow a point <i>ahead</i> of the leader, instead of one behind him as was originally implemented in that behavior.</p><p>Athletes assisting the leader should also keep a minimum distance from each other. That's implemented by adding a separation force.</p><p>The result is a team able to move towards the opponent's goal, without crowding and while simulating an assisted attack movement:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h3>Improving the Attack Support</h3><p>The current implementation of the <code class="inline">attack</code> state is good enough for some situations, but it has a flaw. When someone catches the puck, he becomes the leader and is immediately followed by teammates.</p><p>What happens if the leader is moving towards his own goal when he catches the puck? Take a closer look at the demo above and notice the unnatural pattern when teammates start following the leader.</p><p>When the leader catches the puck, the seek behavior takes some time to correct the leader's trajectory and effectively make him move towards the opponent's goal. While the leader is "maneuvering", teammates will try to seek his <code class="inline">ahead</code> point, which means they will move towards <i>their own</i> goal (or the place that the leader is staring at).</p><p>When the leader is finally in position and ready to move towards the opponent's goal, teammates will still be "maneuvering" to follow the leader. The leader will then move without teammate support while the others adjust their trajectories.</p><p>This flaw can be fixed by checking whether the teammate is ahead of the leader when the team recovers the puck. Here, the condition "ahead" means "closer to the opponent's goal":</p><pre class="brush: actionscript3 noskimlinks noskimwords">class Athlete { // (...) private function isAheadOfMe(theBoid :Boid) :Boolean { var aTargetDistance :Number = distance(getOpponentGoalPosition(), theBoid); var aMyDistance :Number = distance(getOpponentGoalPosition(), mBoid.position); return aTargetDistance <= aMyDistance; } // (...) }</pre><p>If the leader (who is the puck owner) is ahead of the athlete running the code, then the athlete should follow the leader just like he was doing before. 
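<p>The full listing for this improved version is not reproduced here, so the snippet below is only a sketch of how the non-leader branch of <code class="inline">attack()</code> could use <code class="inline">isAheadOfMe()</code>; treat the exact calls and arrangement as assumptions:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// Sketch only: inside attack(), when a teammate (not me) has the puck.
if (isAheadOfMe(aPuckOwner.boid)) {
    // The leader is ahead of me: keep following and supporting him.
    mBoid.steering = mBoid.steering + mBoid.followLeader(aPuckOwner.boid);
    mBoid.steering = mBoid.steering + mBoid.separation();
} else {
    // The leader is behind me: hold the current position (no follow force),
    // but keep a minimum distance from the other athletes.
    mBoid.steering = mBoid.steering + mBoid.separation();
}</pre>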
If the leader is behind him, the athlete should hold his current position, keeping a minimum distance from the others.</p><p>The result is a bit more convincing than the initial <code class="inline">attack</code> implementation:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><p><strong>Tip:</strong> By tweaking the distance calculations and comparisons in the <code class="inline">isAheadOfMe()</code> method, it's possible to modify the way athletes hold their current positions.<br></p><h2>Stealing the Puck</h2><p>The final state in the attacking process is <code class="inline">stealPuck</code>, which becomes active when the opposing team has the puck. The main purpose of the <code class="inline">stealPuck</code> state is to steal the puck from the opponent carrying it, so that the team can start attacking again:</p><figure class="post_image"><img alt=""><figcaption>The stealPuck state and its transitions in the FSM describing the attack process.</figcaption></figure><p>Since the idea behind this state is to steal the puck from the opponent, if the puck is recovered by the team or it becomes free (that is, it has no owner), <code class="inline">stealPuck</code> will pop itself from the brain and push the right state to deal with the new situation:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// (...)</pre><p>If the puck has an owner and he belongs to the opponent's team, the athlete must pursue the opposing leader and try to steal the puck. In order to pursue the opponent's leader, an athlete must <i>predict</i> where he will be in the near future, so he can be intercepted in his trajectory. That's different from just seeking the opposing leader.</p><p>Fortunately, this can be easily achieved with the <a href="" rel="external" target="_blank">pursue behavior</a>. By using a pursuit force in the <code class="inline">stealPuck</code> state, athletes will try to <i>intercept</i> the opponent's leader, instead of just following him:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h3>Preventing a Crowded Steal Movement</h3><p>The current implementation of <code class="inline">stealPuck</code> works, but in a real game only one or two athletes approach the opponent leader to steal the puck. The rest of the team remains in the surrounding areas trying to help, which prevents a crowded stealing pattern.</p><p>It can be fixed by adding a distance check before pursuing the opponent's leader:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// (...)
if (Utils.distance(aOpponentLeader, this) < 150) {
    // Yeah, he is close! Let's pursue him while maintaining a certain
    // separation from the others, to avoid everybody occupying the same
    // position in the pursuit.
    mBoid.steering = mBoid.steering.add(mBoid.pursuit(aOpponentLeader.boid));
    mBoid.steering = mBoid.steering.add(mBoid.separation(50));
} else {
    // No, he is too far away. In the future, we will switch
    // to 'defend' and hope someone closer to the puck can
    // steal it for us.
    // TODO: mBrain.popState();
    // TODO: mBrain.pushState(defend);
}
// (...)</pre><p>Instead of blindly pursuing the opponent's leader, an athlete will check whether the distance between him and the opponent leader is less than, say, <code class="inline">150</code>. 
If that's <code class="inline">true</code>, the pursuit happens normally, but if the distance is greater than <code class="inline">150</code>, it means the athlete is too far from the opponent leader.</p><p>If that happens, there is no point in continuing to try to steal the puck, since it is too far away and there are probably teammates already in place trying to do the same. The best option is to pop <code class="inline">stealPuck</code> from the brain and push the <code class="inline">defend</code> state (which will be explained in the next tutorial). For now, an athlete will just hold his current position if the opponent leader is too far away.</p><p>The result is a more convincing and natural stealing pattern (no crowding):</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Avoiding Opponents While Attacking</h2><p>There is one last trick that the athletes must learn in order to attack effectively. Right now, they move towards the opponent's goal without considering the opponents along the way. An opponent must be seen as a threat, and should be avoided.</p><p>Using the <a href="" rel="external" target="_blank">collision avoidance</a> behavior, athletes can dodge opponents while they move:</p><figure class="post_image"><img alt=""><figcaption>Collision avoidance behavior used to avoid opponents.</figcaption></figure><p>Opponents will be seen as circular obstacles. As a result of the dynamic nature of steering behaviors, which are updated every game loop, the avoidance pattern will gracefully and smoothly work for moving obstacles (which is the case here).</p><p>In order to make athletes avoid opponents (obstacles), a single line must be added to the <code class="inline">attack</code> state:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// (...)
if (amIThePuckOwner()) {
    // I have the puck, so let's move towards the opponent's goal,
    // avoiding any opponents along the way.
    mBoid.steering = mBoid.steering + mBoid.seek(getOpponentGoalPosition());
    mBoid.steering = mBoid.steering + mBoid.collisionAvoidance(getOpponentTeam().members);
}
// (...)</pre><p>This new line will add a collision avoidance force to the athlete, which will be combined with the forces that already exist. As a result, the athlete will avoid obstacles at the same time as seeking the opponent's goal.</p><p>Below is a demonstration of an athlete running the <code class="inline">attack</code> state. Opponents are immovable to highlight the collision avoidance behavior:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Conclusion</h2><p>This tutorial explained the implementation of the attack pattern used by the athletes to steal and carry the puck towards the opponent's goal. Using a combination of steering behaviors, athletes are now able to perform complex movement patterns, such as following a leader or pursuing the opponent with the puck.</p><p>As previously discussed, the attack implementation aims to simulate what humans <em>do</em>, so the result is an approximation of a real game. By individually tweaking the states that compose the attack, you can produce a better simulation, or one that fits your needs.</p><p>In the next tutorial, you will learn how to make athletes defend. 
The AI will become feature-complete, able to attack and defend, resulting in a match with 100% AI-controlled teams playing against each other.</p> <h2>References</h2><ul> <li>Sprite: <a href="">Hockey Stadium</a> on GraphicRiver</li> <li>Sprites: <a href="">Hockey Players</a> by Taylor J Glidden</li> </ul> 2014-06-09T19:33:59.170Z 2014-06-09T19:33:59.170Z Fernando Bevilacqua tag:gamedevelopment.tutsplus.com,2005:PostPresenter/cms-20971 Create a Hockey Game AI Using Steering Behaviors: Foundation <p>There are different ways to make any particular game. Usually, a developer chooses something that fits his skills, using the techniques he already knows to produce the best result possible. Sometimes, people don't yet know that they need a certain technique—perhaps even an easier and better one—simply because they already know a way to create that game. </p><p>In this series of tutorials, you will learn how to create artificial intelligence for a hockey game using a combination of techniques, such as <a href="" target="_self">steering behaviors</a>, that I've previously explained as concepts.</p><p><em><strong>Note:</strong> Although this tutorial is written using AS3 and Flash, you should be able to use the same techniques and concepts in almost any game development environment.</em> </p><hr> <h2>Introduction</h2><p>Hockey is a fun and popular sport and, as a video game, it incorporates many gamedev topics, such as movement patterns, teamwork (attack, defense), artificial intelligence, and tactics. A playable hockey game is a great fit to demonstrate the combination of some useful techniques.</p><p>To simulate the hockey mechanic, with athletes running and moving around, is a challenge. If the movement patterns are pre-defined, even with different paths, the game becomes predictable (and boring). How can we implement such a dynamic environment while still maintaining control over what is going on? The answer is: using <a href="" rel="external" target="_blank">steering behaviors</a>.</p><p>Steering behaviors aim to create realistic movement patterns with improvisational navigation. They are based on simple forces that are combined every game update, so they are extremely dynamic by nature. This makes them the perfect choice for implementing something as complex and dynamic as a hockey or a soccer game.</p><h2>Scoping the Work</h2><p>For the sake of time and teaching, let's reduce the scope of the game a bit. Our hockey game will follow just a small set of the sport's original rules: in our game there will be no penalties and no goal keepers, so every athlete can move around the rink:</p><figure class="post_image"><img alt="" data-<figcaption>Hockey game using simplified rules.</figcaption></figure><p>Each goal will be replaced by a small "wall" with no net. In order to score, a team must move the puck (the disk) to make it touch any side of the opponent's goal. When someone scores, both teams will re-organize, and the puck will be placed at the center; the match will restart a few seconds after that.</p><p>Regarding the puck handling: if an athlete, say A, has the puck, and is touched by an opponent, say B, then B gains the puck and A becomes immovable for a few seconds. If the puck ever leaves the rink, it will be placed at the rink center immediately.</p><p>I will use the <a href="" rel="external" target="_blank">Flixel</a> game engine to take care of the graphical part of the code. 
However, the engine code will be simplified or omitted in the examples, to keep the focus on the game itself.</p><h2>Structuring the Environment</h2><p>Let's begin with the game environment, which is composed of a rink, a number of athletes, and two goals. The rink is made of four rectangles placed around the ice area; these rectangles will collide with everything that touches them, so nothing will leave the ice area.</p><p>An athlete will be described by the <code class="inline">Athlete</code> class:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { private var mBoid :Boid; // controls the steering behavior stuff private var mId :int; // a unique identifier for the athelete public function Athlete(thePosX :Number, thePosY :Number, theTotalMass :Number) { mBoid = new Boid(thePosX, thePosY, theTotalMass); } public function update():void { // Clear all steering forces mBoid.steering = null; // Wander around wanderInTheRink(); // Update all steering stuff mBoid.update(); } private function wanderInTheRink() :void { var aRinkCenter :Vector3D = getRinkCenter(); // If the distance from the center is greater than 80, // move back to the center, otherwise keep wandering. if (Utils.distance(this, aRinkCenter) >= 80) { mBoid.steering = mBoid.steering + mBoid.seek(aRinkCenter); } else { mBoid.steering = mBoid.steering + mBoid.wander(); } } }</pre><p>The property <code class="inline">mBoid</code> is an instance of the <code class="inline">Boid</code> class, an encapsulation of the math logic used in the <a href="" rel="external" target="_blank">steering behaviors series</a>. The <code class="inline">mBoid</code> instance has, among other elements, math vectors describing the current direction, steering force, and position of the entity.</p><p>The <code class="inline">update()</code> method in the <code class="inline">Athlete</code> class will be invoked every time the game updates. For now, it only clears any active steering force, adds a <a href="" rel="external" target="_blank">wander</a> force, and finally calls <code class="inline">mBoid.update()</code>. The former command updates all the steering behavior logic encapsulated within <code class="inline">mBoid</code>, making the athlete move (using <a href="" target="_blank">Euler integration</a>).</p><p>The game class, which is responsible for the game loop, will be called <code class="inline">PlayState</code>. It has the rink, two groups of athletes (one group for each team) and two goals:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class PlayState { private var mAthletes :FlxGroup; private var mRightGoal :Goal; private var mLeftGoal :Goal; public function create():void { // Here everything is created and added to the screen. } override public function update():void { // Make the rink collide with athletes collide(mRink, mAthletes); // Ensure all athletes will remain inside the rink. applyRinkContraints(); } private function applyRinkContraints() :void { // check if athletes are within the rink // boundaries. } }</pre><p>Assuming that a single athlete was added to the match, below is the result of everything so far:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Following the Mouse Cursor </h2><p>The athlete must follow the mouse cursor, so the player can actually control something. 
Since the mouse cursor has a position on the screen, it can be used as the destination for the <a href="" rel="external" target="_blank">arrival behavior</a>.</p><p>The arrival behavior will make an athlete <a href="" rel="external" target="_blank">seek</a> the cursor position, smoothly slow down the velocity as it approaches the cursor, and eventually stop there. </p><p>In the <code class="inline">Athlete</code> class, let's replace the wandering method with the arrival behavior:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { // (...) public function update():void { // Clear all steering forces mBoid.steering = null; // The athlete is controlled by the player, // so just follow the mouse cursor. followMouseCursor(); // Update all steering stuff mBoid.update(); } private function followMouseCursor() :void { var aMouse :Vector3D = getMouseCursorPosition(); mBoid.steering = mBoid.steering + mBoid.arrive(aMouse, 50); } }</pre><p>The result is an athlete that can follow the mouse cursor. Since the movement logic is based on steering behaviors, the athletes navigate the rink in a convincing and smooth way. </p><p>Use the mouse cursor to guide the athlete in the demo below:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Adding and Controlling the Puck</h2><p>The puck will be represented by the class <code class="inline">Puck</code>. The most important parts are the <code class="inline">update()</code> method and the <code class="inline">mOwner</code> property:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Puck { public var velocity :Vector3D; public var position :Vector3D; private var mOwner :Athlete; // the athlete currently carrying the puck. public function setOwner(theOwner :Athlete) :void { if (mOwner != theOwner) { mOwner = theOwner; velocity = null; } } public function update():void { } public function get owner() :Athlete { return mOwner; } }</pre><p>Following the same logic as the athlete, the puck's <code class="inline">update()</code> method will be invoked every time the game updates. The <code class="inline">mOwner</code> property determines whether the puck is in possession of any athlete. If <code class="inline">mOwner</code> is <code class="inline">null</code>, it means the puck is "free", and it will move around, eventually bouncing off the rink walls.</p><p>If <code class="inline">mOwner</code> is not <code class="inline">null</code>, it means that the puck is being carried by an athlete. In this case, it will ignore any collision checks and will be forcefully placed ahead of the athlete. This can be achieved using the athlete's <code class="inline">velocity</code> vector, which also matches the athlete's direction:</p><figure class="post_image"><img alt=""><figcaption>Explanation of how the puck is placed ahead of the athlete.</figcaption></figure><p>The <code class="inline">ahead</code> vector is a copy of the athlete's <code class="inline">velocity</code> vector, so they point in the same direction. After <code class="inline">ahead</code> is normalized, it can be scaled by any value—say, <code class="inline">30</code>—to control how far the puck will be placed ahead of the athlete.</p><p>Finally, the puck's <code class="inline">position</code> receives the athlete's <code class="inline">position</code> added to <code class="inline">ahead</code>, placing the puck at the desired position. 
</p><p>Below is the code for all that:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Puck { // (...) private function placeAheadOfOwner() :void { var ahead :Vector3D = mOwner.boid.velocity.clone(); ahead = normalize(ahead) * 30; position = mOwner.boid.position + ahead; } override public function update():void { if (mOwner != null) { placeAheadOfOwner(); } } // (...) }</pre><p>In the <code class="inline">PlayState</code> class, there is a collision test to check whether the puck overlaps any athlete. If it does, the athlete that just touched the puck becomes its new owner. The result is a puck that "sticks" to the athlete. In the demo below, guide the athlete to touch the puck at the center of the rink to see this in action:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><hr><h2>Hitting the Puck</h2><p>It's time to make the puck move as a result of being hit by the stick. Regardless of the athlete carrying the puck, all that is required to simulate a hit by the stick is to calculate a new velocity vector. That new velocity will move the puck towards the desired destination.</p><p>A velocity vector can be generated by subtracting one position vector from another; the resulting vector goes from one position to the other. That's exactly what is needed to calculate the puck's new velocity vector after a hit:</p><figure class="post_image"><img alt=""><figcaption>Calculation of the puck's new velocity after a hit from the stick.</figcaption></figure><p>In the image above, the destination point is the mouse cursor. The puck's current position can be used as the starting point, while the point where the puck should be after it has been hit by the stick can be used as the ending point. </p><p>The pseudo-code below shows the implementation of <code class="inline">goFromStickHit()</code>, a method in the <code class="inline">Puck</code> class that implements the logic illustrated in the image above:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Puck { // (...) public function goFromStickHit(theAthlete :Athlete, theDestination :Vector3D, theSpeed :Number = 160) :void { // Place the puck ahead of the owner to prevent unexpected trajectories // (e.g. the puck colliding with the athlete that just hit it) placeAheadOfOwner(); // Mark the puck as free (no owner) setOwner(null); // Calculate the puck's new velocity var new_velocity :Vector3D = theDestination - position; velocity = normalize(new_velocity) * theSpeed; } }</pre><p>The <code class="inline">new_velocity</code> vector goes from the puck's current position to the target (<code class="inline">theDestination</code>). After that, it is normalized and scaled by <code class="inline">theSpeed</code>, which defines the magnitude (length) of <code class="inline">new_velocity</code>. That operation, in other words, defines how fast the puck will move from its current position to the destination. Finally, the puck's <code class="inline">velocity</code> vector is replaced by <code class="inline">new_velocity</code>.</p><p>In the <code class="inline">PlayState</code> class, the <code class="inline">goFromStickHit()</code> method is invoked every time the player clicks the screen. When it happens, the mouse cursor is used as the destination for the hit. 
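<p>A minimal sketch of that hookup, written against Flixel's mouse API, might look like the snippet below. The <code class="inline">mPuck</code> field and the exact condition are assumptions for illustration; only <code class="inline">goFromStickHit()</code> and <code class="inline">getMouseCursorPosition()</code> come from the listings above:</p><pre class="brush: actionscript3 noskimlinks noskimwords">// Sketch only: somewhere inside PlayState.update().
if (FlxG.mouse.justPressed() && mPuck.owner != null) {
    // Hit the puck towards the spot the player just clicked.
    var aTarget :Vector3D = getMouseCursorPosition();
    mPuck.goFromStickHit(mPuck.owner, aTarget);
}</pre>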
The result is seen in this demo:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><h2>Adding the A.I.</h2><p>So far, we've had just a single athlete moving around the rink. As more athletes are added, the AI must be implemented to make all these athletes look like they are alive and thinking.</p><p>In order to achieve that, we'll use a stack-based finite state machine (stack-based FSM, for short). As <a href="" rel="external" target="_blank">previously described</a>, FSMs are versatile and useful for implementing AI in games. </p><p>For our hockey game, a property named <code class="inline">mBrain</code> will be added to the <code class="inline">Athlete</code> class:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { // (...) private var mBrain :StackFSM; // controls the AI stuff public function Athlete(thePosX :Number, thePosY :Number, theTotalMass :Number) { // (...) mBrain = new StackFSM(); } // (...) }</pre><p>This property is an instance of <code class="inline">StackFSM</code>, a class previously used in the <a href="" rel="external" target="_blank">FSM tutorial</a>. It uses a stack to control the AI states of an entity. Every state is described as a method; when a state is pushed into the stack, it becomes the <i>active</i> method and is called during every game update.</p><p>Each state will perform a specific task, such as moving the athlete towards the puck. Every state is responsible for ending itself, which means it is responsible for popping itself from the stack.</p><p>The athlete can be controlled by the player or by the AI now, so the <code class="inline">update()</code> method in the <code class="inline">Athlete</code> class must be modified to check that situation:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { // (...) public function update():void { // Clear all steering forces mBoid.steering = null; if (mControlledByAI) { // The athlete is controlled by the AI. Update the brain (FSM) and // stay away from rink walls. mBrain.update(); } else { // The athlete is controlled by the player, so just follow // the mouse cursor. followMouseCursor(); } // Update all steering stuff mBoid.update(); } }</pre><p>If the AI is active, <code class="inline">mBrain</code> is updated, which invokes the currently active state method, making the athlete behave accordingly. If the player is in control, <code class="inline">mBrain</code> is ignored all together and the athlete moves as guided by the player. </p><p>Regarding the states to push into the brain: for now let's implement just two of them. One state will let an athlete prepare himself for a match; when preparing for the match, an athlete will move to his position in the rink and stand still, staring at the puck. The other state will make the athlete simply stand still and stare at the puck.</p><p>In the next sections, we'll implement these states.</p><h3>The Idle State</h3><p>If the athlete is in the <code class="inline">idle</code> state, he will stop moving and stare at the puck. This state is used when the athlete is already in position in the rink and is waiting for something to happen, like the start of the match.</p><p>The state will be coded in the <code class="inline">Athlete</code> class, under the <code class="inline">idle()</code> method:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { // (...) 
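// Reminder: mBrain (the StackFSM instance) is declared and instantiated in the constructor, as shown in the previous listing.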
public function Athlete(thePosX :Number, thePosY :Number, theTotalMass :Number, theTeam :FlxGroup) { // (...) // Tell the brain the current state is 'idle' mBrain.pushState(idle); } private function idle() :void { var aPuck :Puck = getPuck(); stopAndlookAt(aPuck.position); } private function stopAndlookAt(thePoint :Vector3D) :void { mBoid.velocity = thePoint - mBoid.position; mBoid.velocity = normalize(mBoid.velocity) * 0.01; } }</pre><p>Since this method doesn't pop itself from the stack, it will remain active forever. In the future, this state will pop itself to make room for other states, such as <i>attack</i>, but for now it does the trick.</p><p>The <code class="inline">stopAndStareAt()</code> method follows the same principle used to calculate the puck's velocity after a hit. A vector from the athlete's position to the puck's position is calculated by <code class="inline">thePoint - mBoid.position</code> and used as the athlete's new velocity vector.</p><p>That new velocity vector will move the athlete towards the puck. To ensure that the athlete will not move, the vector is scaled by <code class="inline">0.01</code> , "shrinking" its length to almost zero. It makes the athlete stop moving, but keeps him staring at the puck.</p><h3>Preparing For a Match</h3><p>If the athlete is in the <code class="inline">prepareForMatch</code> state, he will move towards his initial position, smoothly stopping there. The initial position is where the athlete should be right before the match starts. Since the athlete should stop at the destination, the arrival behavior can be used again:</p><pre class="brush: actionscript3 noskimlinks noskimwords">public class Athlete { // (...) private var mInitialPosition :Vector3D; // the position in the rink where the athlete should be placed public function Athlete(thePosX :Number, thePosY :Number, theTotalMass :Number, theTeam :FlxGroup) { // (...) mInitialPosition = new Vector3D(thePosX, thePosY); // Tell the brain the current state is 'idle' mBrain.pushState(idle); } private function prepareForMatch() :void { mBoid.steering = mBoid.steering + mBoid.arrive(mInitialPosition, 80); // Am I at the initial position? if (distance(mBoid.position, mInitialPosition) <= 5) { // I'm in position, time to stare at the puck. mBrain.popState(); mBrain.pushState(idle); } } // (...) }</pre><p>The state uses the arrival behavior to move the athlete towards the initial position. If the distance between the athlete and his initial position is less than <code class="inline">5</code>, it means the athlete has arrived at the desired place. When this happens, <code class="inline">prepareForMatch</code> pops itself from the stack and pushes <code class="inline">idle</code>, making it the new active state.</p><p>Below is the result of using a stack-based FSM to control several athletes. Press <code class="inline">G</code> to place them at random positions in the rink, pushing the <code class="inline">prepareForMatch</code> state:</p><figure><iframe src="" width="600" height="480" frameborder="0" scrolling="no"></iframe></figure><p></p><hr><h2>Conclusion</h2><p>This tutorial presented the foundations to implement a hockey game using <a href="" rel="external" target="_blank">steering behaviors</a> and <a href="" rel="external" target="_blank">stack-based finite state machines</a>. Using a combination of those concepts, an athlete is able to move in the rink, following the mouse cursor. 
The athlete can also hit the puck towards a destination.</p><p>Using two states and a stack-based FSM, the athletes can re-organize and move to their position in the rink, preparing for the match.</p><p>In the next tutorial, you will learn how to make the athletes attack, carrying the puck towards the goal while avoiding opponents.</p><div> <h2>References</h2> <ul> <li>Sprite: <a href="">Hockey Stadium</a> on GraphicRiver</li> <li>Sprites: <a href="">Hockey Players</a> by Taylor J Glidden</li> </ul> </div> 2014-05-21T18:11:30.145Z 2014-05-21T18:11:30.145Z Fernando Bevilacqua tag:gamedevelopment.tutsplus.com,2005:PostPresenter/gamedev-849 Understanding Steering Behaviors: Seek <p>Steering behaviors aim to help autonomous characters move in a realistic manner, by using simple forces that are combined to produce life-like, improvisational navigation around the characters' environment. In this tutorial I will cover the basic theory behind the <em>seek</em> steering behavior, as well as its implementation.</p> <p>The ideas behind these behaviors were proposed by <a href="">Craig W. Reynolds</a>; they are not based on complex strategies involving path planning or global calculations, but instead use local information, such as neighbors' forces. This makes them simple to understand and implement, but still able to produce very complex movement patterns.</p> <p><em><strong>Note:</strong> Although this tutorial is written using AS3 and Flash, you should be able to use the same techniques and concepts in almost any game development environment. You must have a basic understanding of math vectors.</em></p> <hr> <h2>Position, Velocity and Movement</h2> <p>The implementation of all forces involved in steering behaviors can be achieved using math vectors. Since those forces will influence the character's velocity and position, it's a good approach to use vectors to represent them as well.</p> <p>Even though a vector has a <em>direction</em>, it will be ignored when related to position (let's assume the position vector is pointing to the character's current location).</p> <div><img alt="Character positioned at x y with velocity a b" data-</div> <p>The figure above represents a character positioned at <code>(x, y)</code> with a velocity <code>(a, b)</code>. The movement is calculated using <a href="" target="_blank">Euler integration</a>:</p> <pre class="brush: actionscript3 noskimlinks noskimwords">position = position + velocity</pre> <p.</p> <div> <br>Move the mouse to move the target.</div> <p>The red square is moving towards a target (the mouse cursor). This movement pattern illustrates the <em>seek</em> behavior <strong>without</strong> any steering forces being applied so far. The green line represents the velocity vector, calculated as follows:</p> <pre class="brush: actionscript3 noskimlinks noskimwords">velocity = normalize(target - position) * max_velocity</pre> <p>It's important to notice that without the steering force, the character describes straight routes and it instantly changes its direction when the target moves, thus making an abrupt transition between the current route and the new one.</p> <hr> <h2>Calculating Forces</h2> <p>If there was only the velocity force involved, the character would follow a straight line defined by the direction of that vector. One of the ideas of steering behaviors is to influence the character's movement by adding forces (called <em>steering forces</em>). 
Depending on those forces, the character will move in one or another direction.</p> <p>For the seek behavior, the addition of steering forces to the character every frame makes it smoothly adjust its velocity, avoiding sudden route changes. If the target moves, the character will <em>gradually</em> change its velocity vector, trying to reach the target at its new location.</p> <p>The seek behavior involves two forces: <em>desired velocity</em> and <em>steering</em>:</p> <div><img alt="Steering forces" data-</div> <p>The <em>desired velocity</em> is a force that guides the character towards its target using the shortest path possible (straight line between them - previously, this was the only force acting on the character). The <em>steering</em> force is the result of the desired velocity subtracted by the current velocity and it pushes the character towards the target as well. </p> <p>Those forces are calculated as follows:</p> <pre class="brush: actionscript3 noskimlinks noskimwords">desired_velocity = normalize(target - position) * max_velocity steering = desired_velocity - velocity</pre> <hr> <h2>Adding Forces</h2> <p <strong>seek path</strong> (orange curve in the figure below):</p> <div><img alt="Seek path" data-</div> <p>The addition of those forces and the final velocity/position calculation are:</p> <pre class="brush: actionscript3 noskimlinks noskimwords">steering = truncate (steering, max_force) steering = steering / mass velocity = truncate (velocity + steering , max_speed) position = position + velocity</pre> <p:</p> <div> <br>Move the mouse to move the target.</div> <p>Every time the target moves, each character's <em>desired velocity</em> vector changes accordingly. The <em>velocity</em> vector, however, takes some time to change and start pointing at the target again. The result is a smooth movement transition.</p> <hr> <h2>Conclusion</h2> <p>Steering behaviors are great for creating realistic movement patterns. The main idea is to use local information to calculate and apply forces to create the behaviors. Even though the calculation is simple to implement, it is still able to produce very complex results.</p> <p>This tutorial described the basics of steering behaviors, explaining the seek behavior. Over the next few posts, we will learn about more behaviors. Check out the next post: <a href="">Flee and Arrival</a>.</p> 2012-10-19T14:30:55.000Z 2012-10-19T14:30:55.000Z Fernando Bevilacqua
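The formulas above map almost directly to code. The following is a minimal Python sketch (not part of the original article) of a single seek update step; the tuple-based vector helpers and parameter names are illustrative assumptions:

import math

def normalize(v):
    # Return a unit-length copy of v = (x, y); a zero vector stays zero.
    length = math.hypot(v[0], v[1])
    return (v[0] / length, v[1] / length) if length > 0 else (0.0, 0.0)

def truncate(v, max_length):
    # Clamp the vector's length to max_length (assumed >= 0) without changing its direction.
    length = math.hypot(v[0], v[1])
    if length > max_length:
        return (v[0] * max_length / length, v[1] * max_length / length)
    return v

def seek_step(position, velocity, target, max_velocity, max_force, max_speed, mass):
    # desired_velocity = normalize(target - position) * max_velocity
    d = normalize((target[0] - position[0], target[1] - position[1]))
    desired = (d[0] * max_velocity, d[1] * max_velocity)
    # steering = desired_velocity - velocity, truncated by max_force and scaled by mass
    steering = truncate((desired[0] - velocity[0], desired[1] - velocity[1]), max_force)
    steering = (steering[0] / mass, steering[1] / mass)
    # velocity = truncate(velocity + steering, max_speed); position = position + velocity
    velocity = truncate((velocity[0] + steering[0], velocity[1] + steering[1]), max_speed)
    position = (position[0] + velocity[0], position[1] + velocity[1])
    return position, velocity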
|
https://gamedevelopment.tutsplus.com/categories/steering-behaviors.atom
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
The Builder Design Pattern provides a series of methods that help the consumer of your class better understand what is happening under the hood.
But it is considered an Anti-Pattern by some developers.
Why use Builder Design Pattern?
It’s an alternative to using multiple constructors by overloading them with more and more parameters.
What are the pros?
- The builder's method names tell you what each value means. With a plain constructor you depend on IDE tooling to hint at parameter names, which may help if the names are informative, but that is not always available, for example when reading code on GitHub or in another IDE/editor.
- You don't need to have all the data for your object at the moment you initialize it; values can be supplied step by step before build() is called.
What are the cons?
- You can end up with methods that require others to be run in a certain order; otherwise, the consumer will run into issues if the calls happen in the wrong order.
- The chain of methods can get really long depending on the implementation.
- The consumer may forget to finish the statement with the build() method and not get the results they expected.
- Uses more memory resources.
How can we avoid it?
Default Parameters
Kotlin supports default parameters (which are also available in other languages like C# and JavaScript).
fun pow(base: Int, power: Int = 2): Int { // ... }
This can work as an alternative to method overloading, since the consumer of this method can use it like this:
pow(2) // outputs 4
pow(2, 3) // outputs 8
Which can make our life easier by only having to maintain a single method.
Named Arguments
This allows our consumers not only to state exactly which argument is assigned to which parameter, but also to reorder the arguments in whatever way they want. This can be handy when dealing with “legacy code” that is hard to understand because of how many parameters it requires.
pow(base = 4, power = 3) // outputs 64
pow(power = 3, base = 4) // also outputs 64
Can we combine them?
Of course, my dear Watson!
pow(base = 3) // outputs 9
In the example above we are passing the base argument as a named argument and the power is using the default parameter value in our method signature.
Now that we know how to do this, we can use them to avoid the Builder Design Pattern in Kotlin.
First, let’s check out the code we would have to write in Java by creating a simplified version of a Hamburger class:
// Hamburger.java
public class HamburgerJava {
    private final boolean hasKetchup;
    private final boolean hasTomatoes;
    private final int meats;

    private HamburgerJava(Builder builder) {
        hasKetchup = builder.hasKetchup;
        hasTomatoes = builder.hasTomatoes;
        meats = builder.meats;
    }

    public static class Builder {
        private boolean hasKetchup;
        private boolean hasTomatoes;
        private int meats;

        public Builder(int meats) {
            if (meats > 3) throw new IllegalArgumentException("Cannot order hamburger with more than 3 meats");
            if (meats < 1) throw new IllegalArgumentException("A hamburger must have at least 1 meat.");
            this.meats = meats;
        }

        public Builder addKetchup(boolean hasKetchup) {
            this.hasKetchup = hasKetchup;
            return this;
        }

        public Builder addTomatoes(boolean hasTomatoes) {
            this.hasTomatoes = hasTomatoes;
            return this;
        }

        public HamburgerJava build() {
            return new HamburgerJava(this);
        }
    }
}
Now, let’s see how we can use our HamburgerJava class:
HamburgerJava doubleMeatWithEverything = new HamburgerJava.Builder(2)
        .addKetchup(true)
        .addTomatoes(true)
        .build();
Cool, maybe some of you guys are used to this.
Now let’s take a look at the Kotlin implementation in our Hamburger class:
// Hamburger.kt
class Hamburger(
    val meats: Int,
    val hasKetchup: Boolean = false,
    val hasTomatoes: Boolean = false
)
Let’s see how it looks when we try to use it:
val doubleMeatWithEverything = Hamburger(
    meats = 2,
    hasKetchup = true,
    hasTomatoes = true
)
By using named arguments and default parameters we avoid the problem of not knowing which value is being passed to which parameter, and we have only a single constructor to maintain.
The only con of this approach is that we lose the ability to pass in values to our object after its creation.
Which is something I don’t think is that common, or is it?
Source:
|
https://learningactors.com/avoiding-the-builder-design-pattern-in-kotlin/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
#include <subcompositor.h>
Detailed Description
Wrapper for the wl_subcompositor interface.
This class is a convenient wrapper for the wl_subcompositor interface. The main purpose of this class is to create SubSurfaces.
To create an instance use Registry::createSubCompositor.
Definition at line 49 of file subcompositor.h.
Member Function Documentation
Creates and sets up a new SubSurface with
parent.
- Parameters
-
Definition at line 68 of file subcompositor.cpp.
Destroys the data held by this SubCompositor. This cleans up the instance so that it can be set up with a new wl_subcompositor interface once there is a new connection available.
Definition at line 56 of file subcompositor.cpp.
- Returns
- The event queue to use for creating a SubSurface.
Definition at line 95 of file subcompositor.cpp.
- Returns
true if managing a wl_subcompositor.
Definition at line 80 of file subcompositor.cpp.
Releases the wl_subcompositor interface.
After the interface has been released the SubCompositor instance is no longer valid and can be set up with another wl_subcompositor interface.
Definition at line 51 of file subcompositor.cpp.
The corresponding global for this interface on the Registry got removed.
This signal is only emitted if the SubCompositor was created by Registry::createSubCompositor
- Since
- 5.5
Sets the
queue to use for creating a SubSurface.
Definition at line 100 of file subcompositor.cpp.
Set up this SubCompositor to manage the
subcompositor.
When using Registry::createSubCompositor there is no need to call this method.
Definition at line 61 of file subcompositor.cpp.
The documentation for this class was generated from the following files:
|
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1SubCompositor.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Due by 5pm on Wednesday, 2/5.
Submission. See the online submission instructions.
Readings. All problems in this homework can be solved with the subset of Python 3 introduced in sections 1.2, 1.3, 1.4, and 1.5 of the Composing Programs online text.
Welcome Survey. Please complete our welcome survey. The survey is for the instructors to get to know their students better, and your responses will be kept confidential to the staff. The survey is due by 5pm on Tuesday, 2/4.
Q1.: op = _____ else: op = _____ return op(a, b)
Q2. of a, b, c. >>> two_of_three(1, 2, 3) 13 >>> two_of_three(5, 3, 1) 34 >>> two_of_three(10, 2, 8) 164 >>> two_of_three(5, 5, 5) 50 """ "*** YOUR CODE HERE ***" does not do the same thing as an if statement in all cases. To prove this fact, write functions c, t, and f such that with_if_statement returns the number 1, but with_if_function does not:
def with_if_statement():
    if c():
        return t()
    else:
        return f()

def with_if_function():
    return if_function(c(), t(), f())

def c():
    "*** YOUR CODE HERE ***"

def t():
    "*** YOUR CODE HERE ***"

def f():
    "*** YOUR CODE HERE ***"
Q4. Douglas Hofstadter’s Pulitzer-prize-winning book, Gödel, Escher, Bach, poses the following mathematical puzzle.
The number n will travel up and down but eventually end at 1 (at least for all numbers that have ever been tried -- nobody has ever proved that the sequence will terminate). Analogously,) # Seven elements are 10, 5, 16, 8, 4, 2, 1 10 5 16 8 4 2 1 >>> a 7 """ "*** YOUR CODE HERE ***"
Hailstone sequences can get quite long! Try 27. What's the longest you can find?
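The problem statement above was partly mangled in extraction, but the sequence it describes is the classic hailstone (Collatz) process. As a hedged sketch only, assuming the standard rules (halve an even number, map an odd number n to 3n + 1, stop at 1), a function matching the shown doctest might look like this:

def hailstone(n):
    # Print the hailstone sequence starting at n and return its length.
    length = 1
    while n != 1:
        print(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        length += 1
    print(n)  # the final 1
    return length

# hailstone(10) prints 10 5 16 8 4 2 1 (one value per line) and returns 7.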
|
https://inst.eecs.berkeley.edu/~cs61a/sp14/hw/hw1.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
A Python package to fake SOC (Security Operations Center) data
Project description
soc-faker
soc-faker is used to generate fake data for use by Security Operations Centers, information security professionals, product teams, and many more.
Getting Started
soc-faker is compatible with Python 2.x and 3.x. You can install
soc-faker using
pip as well as cloning this repository directly.
At the time of writing this document,
soc-faker has the ability to fake data for the following main categories. You can find specific details for each category by selecting the links below:
- Alert
- Computer
- Application
- Employee
- File
- Logs
- Network
- Organization
- Products
- User Agent
- Vulnerability
- Registry
- Timestamp
Installing soc-faker
pip install soc-faker --user
Installing from source
git clone git@github.com:swimlane/soc-faker.git
cd soc-faker
python setup.py install
Prerequisites
The following libraries are required and installed by soc-faker
requests pendulum ipaddress Pillow networkx matplotlib PyGithub PyYAML Faker
GitHub PAT
In addition, you must provide a GitHub Personal Access Token to utilize specific features that rely on data from public github repositories.
Please follow this guide to get a personal access token
Once you have a PAT you can provide this token during initialization of the
SocFaker object:
from socfaker import SocFaker
sf = SocFaker(github_token='YOUR PERSONAL ACCESS TOKEN')
Development
You can use the provided Dockerfile to get a development and testing environment up and running for
soc-faker.
To use the
Dockerfile, cd to this repository's directory and run:
docker build --force-rm -t socfaker .
Once it is built, then run the docker container:
docker run socfaker
Running this will call the test python file in bin\test.py. Modify this file for additional testing and development.
Running the tests
Tests within this project should cover all available properties and methods. As this project grows the tests will become more robust but for now we are testing that they exist and return outputs.
Built With
Contributing
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
Versioning
We use SemVer for versioning.
Change Log
Please read CHANGELOG.md for details on features for a specific version of
soc-faker
Authors
- Josh Rickard - Initial work - MSAdministrator
- Nick Tausek
See also the list of contributors who participated in this project.
License
This project is licensed under the MIT License - see the LICENSE file for details
Credits
soc-faker.
Acknowledgments
- This project utilizes data from the OSSEM project by hunters-forge
.. toctree:: :maxdepth: 2 :caption: Contents: docs/source/faker/application docs/source/faker/azure docs/source/faker/computer docs/source/faker/elastic docs/source/faker/employee docs/source/faker/file docs/source/faker/logs docs/source/faker/network docs/source/faker/organization docs/source/faker/qualysguard docs/source/faker/servicenow docs/source/faker/useragent docs/source/faker/vulnerability
TODO
Employee
- [ ] Manager (Employee Object)
Date
- [ ] Date Between
- [ ] Date X periods back (date after 1/1/2018)
- [ ] Date X per. Forward (date after 1/1/2018)
- [ ] Duration/Span
Address
- [ ] Physical Address?
Network
- [ ] URL
File Info
- [ ] fuzzy?
- [ ] File Path
- [ ] File Reputation?
PCAP
- [ ] Generate Fake PCAP files
|
https://pypi.org/project/soc-faker/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
ilya-biryukov added a comment. In an earlier comment, @simark wrote:
> It seems to me like changes in command line arguments (adding -DMACRO=1) are not taken into account when changing configuration.

It looks like a bug in the preamble handling (it does not check if macros were redefined). You can work around that by making sure the preamble ends before your code starts (the preamble only captures preprocessor directives, so any C++ decl will end it):

Annotations SourceAnnotations(R"cpp(
  int avoid_preamble;
  #ifndef MACRO
  $before[[static void bob() {}]]
  #else
  $after[[static void bob() {}]]
  #endif
  /// ....
)cpp"

================
Comment at: unittests/clangd/XRefsTests.cpp:260
+TEST(DidChangeConfiguration, DifferentDeclaration) {
+  Annotations SourceAnnotations(R"cpp(
----------------
I'd move it to `ClangdTests.cpp`, generic `ClangdServer` tests usually go there. It's fine to `#include "Annotations.h"` there, too, even though it hasn't been used before.

================
Comment at: unittests/clangd/XRefsTests.cpp:271
+
+  MockCompilationDatabase CDB(/*UseRelPaths=*/true);
+  MockFSProvider FS;
----------------
Specifying `/*UseRelPath=*/true` is not necessary for this test, default value should do.

Repository:
  rCTE Clang Tools Extra

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
|
https://www.mail-archive.com/cfe-commits@lists.llvm.org/msg82844.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Brandon N wrote ..
> I meant seeing as others had pointed out the concern that one shouldn't put
> .py files under htdocs/ or similar directories for fear that someone might
> find access to one's source files, wholly intact.
The reference to putting .py files under htdocs pertained more to any
shared set of modules used by your application. The idea being to make
what code is in a handler module file as minimal as possible with
callouts to separate application modules which do the bulk of the work.
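To make the idea concrete, here is a minimal sketch of such a thin handler (module and function names are purely illustrative, and the application package is assumed to live outside the document tree on the normal Python path):

# htdocs/myhandler.py -- kept as thin as possible
from mod_python import apache

import myapp.pages   # hypothetical application package installed outside htdocs

def handler(req):
    # Delegate the real work; no passwords or sensitive paths live in this file.
    req.content_type = "text/html"
    req.write(myapp.pages.render(req.uri))
    return apache.OK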
There are a number of reasons for doing this. The first is that in the
event that an Apache configuration is stuffed up and .py files exposed,
that you aren't exposing the bulk of the code of your application. Ie.,
the important stuff where you might hold things like any database
login/password details or pathnames to other files which may contain
sensitive information.
The second reason is to avoid any problems with modules being loaded
both by the standard Python import mechanism and the mod_python module
import mechanism. Mixing the two can cause some issues and it is easier
to avoid the problem by never using "import" to import modules in the
document tree. Best way of doing that is to move shared modules
elsewhere. If you must import a module in the document tree from another
module in the document tree, use apache.import_module() instead.
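For example (a sketch only; "helpers" is a placeholder module name):

from mod_python import apache

# Instead of "import helpers" for another module that also lives in the document tree:
helpers = apache.import_module("helpers")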
In terms of security of .py files in the document tree, the risk is
similar as to when using .php or .cgi files. If someone screws up the
Apache configuration in all these cases source code could be exposed.
This sort of issue will possibly more easily occur when mod_python is
configured from the main Apache configuration file. At least if
mod_python is configured from a .htaccess file in the document tree, the
file is adjacent to the source code and the association more easily
seen. When in the main Apache configuration, too easy for someone
to unknowingly remove/disable it, or to wipe it out when upgrading
Apache. With a .htaccess file it will keep working unless FileInfo option
is disabled or use of .htaccess files is disabled. If FileInfo is disabled,
result will from memory be a 500 error so still safe. Disable .htaccess
files though and code can still be exposed.
What I would personally be more worried about is where the user that
Apache runs as has some sort of write access to the document tree.
If it does, then .pyc and .pyo files can be left in the document tree
from when a module is loaded. If AddHandler is used to only map .py
files to mod_python, then the .pyc and .pyo files can be exposed and
downloadable. If someone had the right tools they could decompile
the bytecode and find out something about your source code, including
possibly sensitive details.
Even if the user Apache runs as doesn't have write access to the
document tree, I would always suggest the following be added to
the Apache configuration.
<Files *.pyc>
deny from all
</Files>
<Files *.pyo>
deny from all
</Files>
This will block access to the files if they are created by mod_python
where directories are writable, or if the files are inadvertantly copied
there from another location.
As to keeping handler modules out of the document tree, thus eliminating
the danger they could be exposed, this is not really possible with
mod_python as it stands now. With mod_python 3.2 though, there is
potential for it to be done, although it means writing a special handler
which emulates the way that Apache maps URLs to files. The change that
has been made that makes this possible is that in 3.2, it is possible to
modify the value of req.path_info as well as req.filename. Thus a
handler could reevaluate a URL against a part of the filesystem which
isn't in the document tree and then execute a handler to service the
request against what was found.
As an example, in a new system I am working on, you can write
something like:
import mod_python.publisher
handler = handlers.MapLocationToView(
directory = '/tmp/htdocs',
resource_extension = '.py',
script_extension = '.py',
handler = mod_python.publisher.handler,
)
The MapLocationToView handler will map a URL to a .py file like Apache
does now when AddHandler is used and then trigger the standard
mod_python.publisher handler. The difference is that is this example,
the files all live outside of the document tree in '/tmp/htdocs'. The
Apache configuration itself knows nothing about that directory and its
contents can't be exposed in any way if the Apache configuration is
stuffed up.
Graham
>?
>
> On 10/27/05, Graham Dumpleton <grahamd at dscpl.com.au> wrote:
> >
> > Brandon N wrote ..
> > > I've checked out Vampire, and it would seem to be exactly that which
> I
> > > desire (after only a few minutes of experimentation at least). Does
> one
> > > typically include their .py files with this setup in the public
> > directory
> > > (with indexing and such disabled, naturally)? Or is there a way to
> > reference
> > > files outside of the public system?
> >
> > What do you mean by files? Do you mean the .py files which contain the
> > handlers or other Python helper modules, static files etc?
> >
> > In terms of how most mod_python extensions work, eg, Vampire,
> > mod_python.publisher etc, they rely on the fact that Apache performs
> the
> > mapping of URL to a physical file in the filesystem. Ie., they work out
> > what to do based on what Apache has set req.filename to. In order for
> > Apache to make this determination, the .py files must be in the public
> > directories that Apache is managing. Note though that this doesn't
> > mean they have to be physically under the main Apache document
> > root as you can use the Alias directive or symlinks and the FollowSymLinks
> > directive to locate them in different places but still appear under the
> > public
> > URL namespace.
> >
> > Anyway, if you can be clearer about what you mean, can possibly give
> > a better answer. :-)
> >
> > Graham
> >
> > > Thanks to the both of you with your help. It's cleared up a great deal
> > > for
> > > me.
> > >
> > > Cheers!
> > >
> > > On 10/27/05, Graham Dumpleton <grahamd at dscpl.com.au> wrote:
> > > >
> > > > Jorey Bump wrote ..
> > > > > Brandon N wrote:
> > > > > > A) Is it requestHandler's job to determine which file was
> > requested
> > > > and
> > > > > > respond accordingly (via the request's .filename?) with a switch
> > > > > > construct or equivalent?
> > > > >
> > > > > Yes and no. Apache's already passed the file to the handler based
> on
> > > its
> > > > > extension, presence in a directory, or other criteria. The developer
> > > of
> > > > > the handler gets to decide what the handler does with *whatever*
> is
> > > > > passed to it. Some assume it will contain only valid Python code
> and
> > > > > process it as such (mod_python.publisher, for example). Some might
> > > want
> > > > > to process proprietary or other file formats using python (you
> might
> > > > > make a handler to display Word files, for example), but remain
> > agnostic
> > > > > about the actual filename or extension. But there's no reason why
> > your
> > > > > handler can't branch according to the file extension (which is
> what
> > > > > Graham's Vampire does, if I'm not mistaken).
> > > >
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > Mod_python mailing list
> > > > Mod_python at modpython.org
> > > >
> > > >
> >
|
http://modpython.org/pipermail/mod_python/2005-October/019442.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Hello,
I have just started playing around a bit with Python/Tkinter. What I'd like to happen is to draw a circle and then have the circle change its coordinate. The way I was thinking is to have some integer i to be defined 0 and then be updated every frame (ie i+=1), with the drawing algorithm modified appropriately. However, since mainloop() seems to act as a sort of global while(1) loop, this doesn't seem possible. What I have now:
from Tkinter import *

root = Tk()

def drawcircle(canv,x,y,rad):
    canv.create_oval(x-rad,y-rad,x+rad,y+rad,width=0,fill='blue')

canvas = Canvas(width=600, height=200, bg='white')
canvas.pack(expand=YES, fill=BOTH)
text = canvas.create_text(50,10, text="tk test")

#i'd like to recalculate these coordinates every frame
circ1=drawcircle(canvas,100,100,20)
circ2=drawcircle(canvas,500,100,20)

root.mainloop()
So, how do I go about doing this?
Another question, could anyone suggest a good Tkinter/Python tutorial? What I need is maximally simple and relatively short tutorial that will allow me to make *simple* 2D simulations (ie, my goal for a first Python algorithm is a 1D molecular dynamics simulation using Lennard-Jones potential).
Thank you very much in advance,
Ilya
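One common approach (a minimal sketch, not from the original post; names and timings are illustrative) is to let Tkinter's own event loop drive the animation with after() and move the item by its canvas id:

from Tkinter import *

root = Tk()
canvas = Canvas(root, width=600, height=200, bg='white')
canvas.pack(expand=YES, fill=BOTH)

# create_oval returns an item id that can be moved later
circ = canvas.create_oval(80, 80, 120, 120, width=0, fill='blue')

frame = [0]

def step():
    canvas.move(circ, 2, 0)        # shift the circle 2 pixels right each frame
    frame[0] += 1
    if frame[0] < 200:             # stop after 200 frames
        root.after(20, step)       # schedule the next frame in ~20 ms

root.after(20, step)
root.mainloop()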
|
https://www.daniweb.com/programming/software-development/threads/106935/drawing-a-moving-circle-with-python-tkinter-good-gui-tutorial
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
This is the fourth part of the libgdx tutorial in which we create a 2d platformer prototype modeled after Star Guard. You can read up on the previous articles if you are interested in how we got here.
Following the tutorial so far, we have a tiny world consisting of some blocks and our hero Bob, who can move around in a nice way. The problem is, he
doesn't have any interaction with the world. If we switch the tile rendering back on, we would see Bob happily walking and jumping around without the blocks impeding him. All the blocks get ignored. This happens because we never check whether Bob actually collides with the blocks. Collision detection is nothing more than detecting when two or more objects collide. In our case we need to detect when Bob collides with the blocks. What is actually checked is whether Bob's bounding box intersects with the bounding boxes of the respective blocks. If it does, we have detected a collision. We take note of the objects (Bob and the block(s)) and act accordingly. In our case we need to stop Bob from advancing, falling or jumping, depending on which side of the block Bob collided with.
The quick and dirty way
The easy and quick way to do it is to iterate through all the blocks in the world and check if the blocks collide with Bob’s current bounding box. This works well in our tiny 10×7 world but if we have a huge world with thousands of blocks, doing the detection every frame becomes impossible without affecting performance.
A better way
To optimise the above solution we will selectively pick the tiles that are potential candidates for collision with Bob.
By design, the game world consists of blocks whose bounding boxes are axis aligned and their width and height are both 1 unit.
In this case our world looks like the following image (all the blocks/tiles are in unit blocks):
The red squares represent the bounds where the blocks would have been placed if any. The yellow ones are placed blocks.
Now we can pick a simple 2 dimensional array (matrix) for our world and each cell will hold a
Block or
null if there is none. This is the map container. We always know where Bob is so it is easy to work out in which cell we are. The easy and lazy way to get the block candidates that Bob can collide with is to pick all the surrounding cells and check if Bob's current bounding box overlaps with one of the tiles that has a block.
Because we also control Bob’s movement we have access to his direction and movement speed. This narrows our options down even further. For example if Bob is heading left we have the following scenario:
The above image gives us 2 candidate cells (tiles) to check if the objects in those cells collide with Bob. Remember that gravity is constantly pulling Bob down so we will always have to check for tiles on the Y axis. Based on the vertical velocity’s sign we know when Bob is jumping or falling. If Bob is jumping, the candidate will be the tile (cell) above him. A negative vertical velocity means that Bob is falling so we pick the tile from underneath him as a candidate. If he is heading left (his velocity is < 0) then we pick the candidate on his left. If he’s heading right (velocity > 0) then we pick the tile to his right. If the horizontal velocity is 0 that means we don’t need to bother with the horizontal candidates. We need to make it optimal because we will be doing this every frame and we will have to do this for every enemy, bullet and whatever collideable entities the game will have.
What happens upon collision?
This is very simple in our case. Bob’s movement on that axis stops. His velocity on that axis will be set to 0. This can be done only if the 2 axis are checked separately. We will check for the horizontal collision first and if Bob collides, then we stop his horizontal movement.
We do the exact same thing on the vertical (Y) axis. It is simple as that.
Simulate first and render after
We need to be careful when we check for collision. We humans tend to think before we act. If we are facing a wall, we don’t just walk into it, we see and we estimate the distance and we stop before we hit the wall. Imagine if you were blind. You would need a different sensor than your eye. You would use your arm to reach out and if you feel the wall, you’d stop before you walked into it. We can translate this to Bob, but instead of his arm we will use his bounding box. First we displace his bounding box on the X axis by the distance it would have taken Bob to move according to his velocity and check if the new position would hit the wall (if the bounding box intersects with the block’s bounding box). If yes, then a collision has been detected. Bob might have been some distance away from the wall and in that frame he would have covered the distance to the wall and some more. If that’s the case, we will simply position Bob next to the wall and align his bounding box with the current position. We also set Bob’s speed to 0 on that axis. The following diagram is an attempt to show just what I have described.
The green box is where Bob currently stands. The displaced blue box is where Bob should be after this frame. The purple area is how much Bob is into the wall. That is the distance we need to push Bob back so he stands next to the wall. We just set his position next to the wall to achieve this without too much computation. The code for collision detection is actually very simple. It all resides in the
BobController.java. There are a few other changes too which I should mention prior to the controller. The
World.java has the following changes:
public class World { /** Our player controlled hero **/ Bob bob; /** A world has a level through which Bob needs to go through **/ Level level; /** The collision boxes **/ Array<Rectangle> collisionRects = new Array<Rectangle>(); // Getters ----------- public Array<Rectangle> getCollisionRects() { return collisionRects; } public Bob getBob() { return bob; } public Level getLevel() { return level; } /** Return only the blocks that need to be drawn **/ public List<Block> getDrawableBlocks(int width, int height) { int x = (int)bob.getPosition().x - width; int y = (int)bob.getPosition().y - height; if (x < 0) { x = 0; } if (y < 0) { y = 0; } int x2 = x + 2 * width; int y2 = y + 2 * height; if (x2 > level.getWidth()) { x2 = level.getWidth() - 1; } if (y2 > level.getHeight()) { y2 = level.getHeight() - 1; } List<Block> blocks = new ArrayList<Block>(); Block block; for (int col = x; col <= x2; col++) { for (int row = y; row <= y2; row++) { block = level.getBlocks()[col][row]; if (block != null) { blocks.add(block); } } } return blocks; } // -------------------- public World() { createDemoWorld(); } private void createDemoWorld() { bob = new Bob(new Vector2(7, 2)); level = new Level(); } }
#09 –
collisionRects is just a simple array where I will put the rectangles Bob is colliding with in that particular frame. This is only for debug purposes and to show the boxes on the screen. It can and will be removed from the final game.
#13 – Just provides access to the collision boxes
#23 –
getDrawableBlocks(int width, int height) is the method that returns the list of
Block objects that are in the camera’s window and will be rendered. This method is just to prepare the application to render huge worlds without performance loss. It’s a very simple algorithm. Get the blocks surrounding Bob within a distance and return those to render. It’s an optimisation.
#61 – Creates the
Level declared in line #06. It’s good to move out the level from the world as we want our game to have multiple levels. This is the obvious first step. The
Level.java can be found here.
As I mentioned before, the actual collision detection is in
BobController.java
public class BobController { // ... code omitted ... // private Array<Block> collidable = new Array<Block>(); // ... code omitted ... // public void update(float delta) { processInput(); if (grounded && bob.getState().equals(State.JUMPING)) { bob.setState(State.IDLE); } bob.getAcceleration().y = GRAVITY; bob.getAcceleration().mul(delta); bob.getVelocity().add(bob.getAcceleration().x, bob.getAcceleration().y); checkCollisionWithBlocks(delta); bob.getVelocity().x *= DAMP; if (bob.getVelocity().x > MAX_VEL) { bob.getVelocity().x = MAX_VEL; } if (bob.getVelocity().x < -MAX_VEL) { bob.getVelocity().x = -MAX_VEL; } bob.update(delta); } private void checkCollisionWithBlocks(float delta) { bob.getVelocity().mul(delta); Rectangle bobRect = rectPool.obtain(); bobRect.set(bob.getBounds().x, bob.getBounds().y, bob.getBounds().width, bob.getBounds().height); int startX, endX; int startY = (int) bob.getBounds().y; int endY = (int) (bob.getBounds().y + bob.getBounds().height); if (bob.getVelocity().x < 0) { startX = endX = (int) Math.floor(bob.getBounds().x + bob.getVelocity().x); } else { startX = endX = (int) Math.floor(bob.getBounds().x + bob.getBounds().width + bob.getVelocity().x); } populateCollidableBlocks(startX, startY, endX, endY); bobRect.x += bob.getVelocity().x; world.getCollisionRects().clear(); for (Block block : collidable) { if (block == null) continue; if (bobRect.overlaps(block.getBounds())) { bob.getVelocity().x = 0; world.getCollisionRects().add(block.getBounds()); break; } } bobRect.x = bob.getPosition().x; startX = (int) bob.getBounds().x; endX = (int) (bob.getBounds().x + bob.getBounds().width); if (bob.getVelocity().y < 0) { startY = endY = (int) Math.floor(bob.getBounds().y + bob.getVelocity().y); } else { startY = endY = (int) Math.floor(bob.getBounds().y + bob.getBounds().height + bob.getVelocity().y); } populateCollidableBlocks(startX, startY, endX, endY); bobRect.y += bob.getVelocity().y; for (Block block : collidable) { if (block == null) continue; if (bobRect.overlaps(block.getBounds())) { if (bob.getVelocity().y < 0) { grounded = true; } bob.getVelocity().y = 0; world.getCollisionRects().add(block.getBounds()); break; } } bobRect.y = bob.getPosition().y; bob.getPosition().add(bob.getVelocity()); bob.getBounds().x = bob.getPosition().x; bob.getBounds().y = bob.getPosition().y; bob.getVelocity().mul(1 / delta); } private void populateCollidableBlocks(int startX, int startY, int endX, int endY) { collidable.clear(); for (int x = startX; x <= endX; x++) { for (int y = startY; y <= endY; y++) { if (x >= 0 && x < world.getLevel().getWidth() && y >=0 && y < world.getLevel().getHeight()) { collidable.add(world.getLevel().get(x, y)); } } } } // ... code omitted ... // }
The full source code is on github and I have tried to document it but I will go through the important bits here.
#03 – the
collidable array will hold each frame the blocks that are the candidates for collision with Bob.
The
update method is more concise now.
#07 – processing the input as usual and nothing changed there
#08 – #09 – resets Bob’s state if he’s not in the air.
#12 – Bob’s acceleration is transformed to the frame time. This is important as a frame can be very small (usually 1/60 second) and we want to do this conversion just once in a frame.
#13 – compute the velocity in frame time
#14 – is highlighted because this is where the collision detection is happening. I’ll go through that method in a bit.
#15 – #22 – Applies the DAMP to Bob to stop him and makes sure that Bob is not exceeding his maximum velocity.
#25 – the
checkCollisionWithBlocks(float delta) method which sets Bob’s states, position and other parameters based on his collision or not with the blocks in the level.
#26 – transform velocity to frame time
#27 – #28 – We use a Pool to obtain a Rectangle which is a copy of Bob’s current bounding box. This rectangle will be displaced where bob should be this frame and checked against the candidate blocks.
#29 – #36 – These lines identify the start and end coordinates in the level matrix that are to be checked for collision. The level matrix is just a 2 dimensional array and each cell represents one unit so can hold one block. Check
Level.java
#31 – The Y coordinate is set since we only look for the horizontal for now.
#32 – checks if Bob is heading left and if so, it identifies the tile to his left. The math is straight forward and I used this approach so if I decide that I need some other measurements for cells, this will still work.
#37 – populates the
collidable array with the blocks within the range provided. In this case is either the tile on the left or on the right, depending on Bob’s bearing. Also note that if there is no block in that cell, the result is null.
#38 – this is where we displace the copy of Bob’s bounding box. The new position of
bobRec is where Bob should be in normal circumstances. But only on the X axis.
#39 – remember the collisionRects from the world for debugging? We clear that array now so we can populate it with the rectangles that Bob is colliding with.
#40 – #47 – This is where the actual collision detection on the X axis is happening. We iterate through all the candidate blocks (in our case will be 1) and check if the block’s bounding box intersects Bob’s displaced bounding box. We use the
bobRect.overlaps method, which is part of the Rectangle class in libgdx and returns true if the 2 rectangles overlap. If there is an overlap, we have a collision, so we set Bob's velocity to 0 (line #43), add the rectangle to the
world.collisionRects and break out of the detection.
#48 – We reset the bounding box’s position because we are moving to check collision on the Y axis disregarding the X.
#49 – #68 – is exactly the same as before but it happens on the Y axis. There is one additional instruction #61 – #63 and that sets the
grounded state to
true if a collision was detected when Bob was falling.
#69 – Bob’s rectangle copy is reset
#70 – Bob’s new velocity is being set which will be used to compute Bob’s new position.
#71 – #72 – Bob’s real bounds’ position is updated
#73 – We transform the velocity back to the base measurement units. This is very important.
And that is all for the collision of Bob with the tiles. Of course we will evolve this as more entities are added, but for now it is as good as it gets. We cheated here a bit: in the diagram I stated that I would place Bob next to the Block when colliding, but in the code I completely ignore the repositioning. Because the distance is so tiny that we can't even see it, it's OK. It can be added, it won't make much difference. If you decide to add it, make sure you set Bob's position next to the Block, a tiny bit farther away so the overlap function will return
false. There is a small addition to the
WorldRenderer.java too.
public class WorldRenderer { // ... code omitted ... // public void render() { spriteBatch.begin(); drawBlocks(); drawBob(); spriteBatch.end(); drawCollisionBlocks(); if (debug) drawDebug(); } private void drawCollisionBlocks() { debugRenderer.setProjectionMatrix(cam.combined); debugRenderer.begin(ShapeType.FilledRectangle); debugRenderer.setColor(new Color(1, 1, 1, 1)); for (Rectangle rect : world.getCollisionRects()) { debugRenderer.filledRect(rect.x, rect.y, rect.width, rect.height); } debugRenderer.end(); } // ... code omitted ... // }
The addition of the
drawCollisionBlocks() method which draws a white box wherever the collision is happening. It’s all for your viewing pleasure. The result of the work we put in so far should be similar to this video:
This article should wrap up basic collision detection. Next we will look at extending the world, camera movement, creating enemies, using weapons, adding sound. Please share your ideas on what should come first, as all are important. The source code for this project can be found on GitHub; you need to check out the branch part4. To check it out with git:
git clone -b part4 git@github.com:obviam/star-assault.git. You can also download it as a zip file. There is also a nice platformer in the libgdx tests directory. SuperKoalio. It demonstrates a lot of things I have covered so far and it’s much shorter and for the ones with some libgdx experience it is very helpful.
Reference: Android Game Development with libgdx – Collision Detection, Part 4 from our JCG partner Impaler at the Against the Grain blog.
This is an awesome tutorial. By the way, will this collision method work even if the player sprite is bigger than the tile sprite? I’m asking that because I’ve found some issues while working with collisions in Java2D.
Thanks in advance!
continue the tutorial please or at least give the source code i will try it out my self please
i like the tut very much……….
|
https://www.javacodegeeks.com/2013/03/android-game-development-with-libgdx-collision-detection-part-4.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
How to install 'OAuth' for PHP
Ok so I am progressing...
I have installed PHP Version 5.6.8 and wish to use the PHP OAuth Extention to use the Twitter API but it says Class 'OAuth' not found.
I'm new to Linux so unsure what command I need to type in to install it.
Can anyone help please?
Hi @Brian-Moreau, OAuth in PHP is supplied by the PECL package. Can you try installing it with the following commands:
opkg update
opkg install php5-pecl
- Brian Moreau
Unknown package 'php5-pecl'
root@Omega-2774:/# opkg update
Downloading.
Updated list of available packages in /var/opkg-lists/chaos_calmer_base.
Downloading.
Signature check passed.
Downloading.
Updated list of available packages in /var/opkg-lists/chaos_calmer_packages.
Downloading.
Signature check passed.
root@Omega-2774:/# opkg install php5-pecl
Unknown package 'php5-pecl'.
Collected errors:
* opkg_install_cmd: Cannot install package php5-pecl.
root@Omega-2774:/#
- Brian Moreau
Ok so I ran an 'opkg list' command and got listed 100's of packages.
PECL is not there...
Another forum suggested I might need PEAR?
This is a list of most of the PHP packages...
php5-mod-calendar - 5.6.8-1 - Calendar shared module php5-mod-ctype - 5.6.8-1 - Ctype shared module php5-mod-curl - 5.6.8-1 - cURL shared module php5-mod-dom - 5.6.8-1 - DOM shared module php5-mod-exif - 5.6.8-1 - EXIF shared module php5-mod-fileinfo - 5.6.8-1 - Fileinfo shared module php5-mod-ftp - 5.6.8-1 - FTP shared module php5-mod-gd - 5.6.8-1 - GD graphics shared module php5-mod-gettext - 5.6.8-1 - Gettext shared module php5-mod-gmp - 5.6.8-1 - GMP shared module php5-mod-hash - 5.6.8-1 - Hash shared module php5-mod-iconv - 5.6.8-1 - iConv shared module php5-mod-json - 5.6.8-1 - JSON shared module php5-mod-ldap - 5.6.8-1 - LDAP shared module php5-mod-mbstring - 5.6.8-1 - MBString shared module php5-mod-mcrypt - 5.6.8-1 - Mcrypt shared module php5-mod-mysql - 5.6.8-1 - MySQL shared module php5-mod-mysqli - 5.6.8-1 - MySQL Improved Extension shared module php5-mod-openssl - 5.6.8-1 - OpenSSL shared module php5-mod-pcntl - 5.6.8-1 - PCNTL shared module php5-mod-pdo - 5.6.8-1 - PHP Data Objects shared module php5-mod-pdo-mysql - 5.6.8-1 - PDO driver for MySQL shared module php5-mod-pdo-pgsql - 5.6.8-1 - PDO driver for PostgreSQL shared module php5-mod-pdo-sqlite - 5.6.8-1 - PDO driver for SQLite 3.x shared module php5-mod-pgsql - 5.6.8-1 - PostgreSQL shared module php5-mod-session - 5.6.8-1 - Session shared module php5-mod-shmop - 5.6.8-1 - Shared Memory shared module php5-mod-simplexml - 5.6.8-1 - SimpleXML shared module php5-mod-soap - 5.6.8-1 - SOAP shared module php5-mod-sockets - 5.6.8-1 - Sockets shared module php5-mod-sqlite3 - 5.6.8-1 - SQLite3 shared module php5-mod-sysvmsg - 5.6.8-1 - System V messages shared module php5-mod-sysvsem - 5.6.8-1 - System V shared memory shared module php5-mod-sysvshm - 5.6.8-1 - System V semaphore shared module php5-mod-tokenizer - 5.6.8-1 - Tokenizer shared module php5-mod-xml - 5.6.8-1 - XML shared module php5-mod-xmlreader - 5.6.8-1 - XMLReader shared module php5-mod-xmlwriter - 5.6.8-1 - XMLWriter shared module php5-mod-zip - 5.6.8-1 - ZIP shared module
@Brian-Moreau They might have removed the package from the latest build. Are you familiar with the cross compile environment? You should be able to compile the php5-pecl package with the following make file:
# # Copyright (C) 2011-2014 OpenWrt.org # # This is free software, licensed under the GNU General Public License v2. # See /LICENSE for more information. # define Package/php5-pecl/Default SUBMENU:=PHP SECTION:=lang CATEGORY:=Languages URL:= MAINTAINER:=Michael Heimpold <mhei@heimpold.de> DEPENDS:=php5 endef define Build/Configure ( cd $(PKG_BUILD_DIR); $(STAGING_DIR_HOST)/usr/bin/phpize ) $(Build/Configure/Default) endef CONFIGURE_ARGS+= \ --with-php-config=$(STAGING_DIR_HOST)/usr/bin/php-config define PECLPackage define Package/php5-pecl-$(1) $(call Package/php5-pecl/Default) TITLE:=$(2) ifneq ($(3),) DEPENDS+=$(3) endif endef define Package/php5-pecl-$(1)/install $(INSTALL_DIR) $$(1)/usr/lib/php $(INSTALL_BIN) $(PKG_BUILD_DIR)/modules/$(subst -,_,$(1)).so $$(1)/usr/lib/php/ $(INSTALL_DIR) $$(1)/etc/php5 ifeq ($(4),zend) echo "zend_extension=/usr/lib/php/$(subst -,_,$(1)).so" > $$(1)/etc/php5/$(subst -,_,$(1)).ini else echo "extension=$(subst -,_,$(1)).so" > $$(1)/etc/php5/$(subst -,_,$(1)).ini endif endef endef
Thanks for that Boken Lin
I assume I have to save the above code as a file?, of type? somewhere? on the device then run it with MAKE?
Sorry I really know nothing about Linux or OpenWtr
@Brian-Moreau You first need to set up a cross-compile environment. Then you will need to go into the feeds directory and create a directory there. Inside the directory, you will put the Makefile. Then when you go to compile the package, you will use the make menuconfig tool to select the php5-pecl package for compilation. From there, you will have the choice to build it directly into a firmware or build it as a separate package that you can then install on your Omega.
Hope this helps!
- Danny van der Sluijs
@Boken-Lin Thank you, as your name is popping up all over the community with very good answers. I've tried to do as you suggested but I'm not quite there yet. I'm trying to achieve the same thing here, to install pecl, which seems to need the cross compile. I made it to the make menuconfig step and am able to set the initial options (Target system, subtarget, Target profile). Then it becomes unclear.
I've created the folder php5-pecl in the feeds directory. Inside I've created the Makefile.
But I can't find php5-pecl in the make menuconfig options.
Any tips or ideas...?
@Danny-van-der-Sluijs Can you post the content of your
Makefile? I believe there's a configure in there that allows you to set which category you will be placing the package under.
Hi Sorry I am still totally lost...
I don't seem to be able to run the first command to setup the Cross-Compile Environment.
If I type ....
$ apt-get install -y subversion build-essential libncurses5-dev zlib1g-dev gawk flex quilt git-core unzip libssl-dev
I get $ not found, or if I just start command with apt-get .... I get apt-get not found error.
I have tried this from both the control panel command line interface and the serial interface.
What is the $ at the beginning of the commands and why is my device not recognising it?
Hi @Brian-Moreau, it seems that you are trying to set up the cross-compile environment on the Omega. You need to set it up on a Linux computer. The cross-compile environment is an environment that allows you to compile source code into Omega-specific binaries. Since the Omega itself doesn't have a huge amount of computational resources, it is usually done on a desktop (or laptop) computer.
Do you know how to setup a virtual machine? We can walk you through the steps.
Hi again @Boken-Lin ,
I very much appreciate your help and assistance in this since it is not strictly an Omega problem.
Ok, I am using a WINXP PC and have set up dual boot in the past, so I should be able to manage that again, but I just wonder which virtual machine you would recommend as best for working with the Omega?
- Kit Bishop
@Brian-Moreau I have had a lot of success running a KUbuntu 14.04 VM in a VirtualBox VM under Windows
@Brian-Moreau Yeah, like @Kit-Bishop mentioned, one of the Ubuntu versions is probably the easiest way to get started. We've also got a pre-compiled SDK for 64-bit Ubuntu.
Hi again @Broken-Lin
Ok I made some progress...
I installed KUbuntu on another computer.
I have followed steps 1 and 2 for setting up the Cross Compile Platform and all is good.
I now have the following prompt..
/openwrt t$
I am now at Step 3: Update Feeds but unsure what to do?
I typed cd feeds to go to the feeds directory but it says no such file or directory.
Thanks in advance
Brian
@Brian-Moreau Some small bits of clarification that may help:
- There is no directory feeds - feeds is a script file in the directory scripts under your openwrt directory
- The file feeds.conf.default that needs modifying referred to in Step 3 of the tutorial is in your openwrt directory
- The command to be run as described in Step 3 i.e.:
scripts/feeds update -a
should be run from your openwrt directory.
It runs the feeds script that is in the scripts directory.
- All other commands covered in the tutorial should similarly just be run from the openwrt directory
@Boken-Lin said:
Do you know how to setup a virtual machine? We can walk you through the steps.
This would be great been thinking about doing this while I wait on my second Omega. Could you be so kind as to post the steps for setting up VM?
@Rudy-Trujillo First you will need to get and install a copy of VirtualBox. This can be found at:
Then you will need an OS image to run on the VirtualBox - most commonly used for Omega work seems to be KUbuntu.
There are two ways to do this:
- The hard way: download an ISO of the KUbuntu version you want (these can be found here:). Then create a VM instance in VirtualBox that you install to from the ISO image - i.e. set the VM up to boot from the ISO image as a CD/DVD and follow the installation process. When that is complete, disconnect the VM from the ISO image and when you reboot the VM, you should be running the installed system.
- The easy way: Download a pre-installed VirtualBox VM image for the version you want. I would suggest the one that can be obtained from here:
Then just open it using VirtualBox and you will be running the pre-installed system.
@Rudy-Trujillo A PS to my previous message: some people prefer VMWare over VirtualBox.
A free copy of VMWare can be found at:
The principle of VMWare is pretty much the same as VirtualBox
I am less familiar with the availability of pre-installed VMWare images and you may have to do a google search for one.
|
http://community.onion.io/topic/138/how-to-install-oauth-for-php
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
WCF RIA Services
By Microsoft Silverlight Team | September 1, 2010 | Level 300 : Intermediate
Summary. This QuickStart shows you how to use RIA Services to display data from a database, restrict data modifications to authenticated users, and update data.
This QuickStart contains the following sections:
This QuickStart uses the AdventureWorksLT sample database. To download the code and database for this QuickStart, see WCF RIA Services sample and AdventureWorksLT sample database.
Creating a RIA Services-Enabled Application
To use RIA Services in a Silverlight client, you must create a RIA Services link between the Silverlight project and the server project. You can either manually select the Enable WCF RIA Services check box when creating a Silverlight application, or you can create a project using the Silverlight Business Application template, which automatically includes a RIA Services link. This QuickStart uses the Silverlight Business Application template. The following image shows how to create a RIA Services-enabled application using the Silverlight Business Application template.
Creating the Data Model and Domain Service
You can represent your data using any type of data access layer, such as Entity Data Model, LINQ-to-SQL object model, CLR object, or a Web service. To use the Entity Data Model, you add a new item to the server project, and select the ADO.NET Entity Data Model item.
In the Entity Data Model wizard, you select the entities to include in the data model. In the following image, the Customer table from AdventureWorksLT is included.
Next, you add a domain service to expose a set of related operations in the data access layer in the form of a service layer. A domain service is a WCF service that encapsulates the business logic of an application.
After adding the data model, you must build the project. Building the project ensures that the entity classes are created and will be available when you create the domain service. In the server project, you add a new item and select Domain Service Class. The following image shows how to create a domain service named SalesDomainService.
When you add a domain service, the Add New Domain Service Class dialog box appears. In the Add New Domain Service Class dialog box, you select which entities to expose through the domain service, and which operations are permitted. The following image shows how to select the Customer entity and enable editing.
After adding the domain service, you must build the solution. When you build the solution, code is generated in the Silverlight application that you will use to call the domain service. The generated code is located in a hidden folder in the client project. You can view the hidden folder by selecting Show All Files in the client project. The generated code is in a folder named Generated_Code.
In the Generated_Code folder is a file with the extension .g.cs or .g.vb. This file contains the generated code you will call in the client project. The generated code contains a DomainContext class that represents the domain service, and methods that represent the domain operations. You can view this code, but should not modify it because your changes will be lost the next time the solution is built.
Displaying the Data
One way to display data is to use the following controls:
- DataGrid - to display the data in a table
- DataPager - to break the data into smaller pages
- DomainDataSource - to simplify the interaction between the user interface and the data
The DomainDataSource control enables you to specify several values for retrieving the data. For example, you can specify the name of the method to use for the query, the number of records to retrieve in each request, and how to sort, filter, or group the values. The following markup shows how to use these controls in the Silverlight project in the Views\Home.xaml file. These controls use the GetCustomers method in the domain service, retrieve 20 records with each request, and sort by CustomerID.
<riaControls:DomainDataSource x:Name="domainDataSource1" QueryName="GetCustomersQuery" LoadSize="20" AutoLoad="True">
    <riaControls:DomainDataSource.DomainContext>
        <ds:SalesDomainContext />
    </riaControls:DomainDataSource.DomainContext>
    <riaControls:DomainDataSource.SortDescriptors>
        <riaControls:SortDescriptor PropertyPath="CustomerID" Direction="Ascending" />
    </riaControls:DomainDataSource.SortDescriptors>
</riaControls:DomainDataSource>
<sdk:DataGrid x:Name="CustomerDataGrid" ItemsSource="{Binding Data, ElementName=domainDataSource1}" />
<sdk:DataPager Source="{Binding Data, ElementName=domainDataSource1}" PageSize="20" />
For the previous markup to work, you must have references to the System.Windows.Controls.Data and System.Windows.Controls.DomainServices assemblies. You must also have the appropriate namespace declarations in the Page element; the declarations differ slightly between the C# and Visual Basic versions of the project. For Visual Basic, the ds namespace prefix maps to RIAExampleApp instead of RIAExampleApp.Web.
When you run the application, the results are displayed in the DataGrid.
Customizing Data Access
When you expose an entity through a domain service and select the Enable editing check box, the RIA Services framework automatically creates query, insert, update, and delete methods. These methods are a starting point, but it is very important that you customize these methods to meet your security requirements. You should remove any domain operations that are not required because other application developers could access the domain operation in a way that you had not intended. You can restrict access to a domain operation by applying either the RequiresAuthentication or RequiresRole attribute to the method.
The following code shows the InsertCustomer and DeleteCustomer methods removed from the domain service (SalesDomainService.cs or SalesDomainService.vb) that was created in the server project. The code also shows the RequiresAuthentication attribute applied to the UpdateCustomer method.
[EnableClientAccess()]
public class SalesDomainService : LinqToEntitiesDomainService<AdventureWorksLT2008_DataEntities>
{
    public IQueryable<Customer> GetCustomers()
    {
        return this.ObjectContext.Customers;
    }

    [RequiresAuthentication]
    public void UpdateCustomer(Customer currentCustomer)
    {
        this.ObjectContext.Customers.AttachAsModified(currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
    }
}
When you create a domain service, a metadata class is automatically created in the server project for any entities that are exposed by the domain service. The metadata class is named with the .metadata.cs (or .metadata.vb) extension. For example, a domain service named SalesDomainService has a corresponding file named SalesDomainService.metadata.cs and in that file are metadata classes for each entity that is exposed.
In the metadata class, you apply attributes that specify behavior and validation requirements for the entity. For example, you apply the Exclude attribute to properties that you do not want to expose in the application, and you apply the Required attribute to properties that users must provide a value for. When you build the solution, the Required attribute is automatically applied to properties of the entity that is generated in the client project. Therefore, the same validation rule is applied to both the client and server projects. The following code shows the metadata class (SalesDomainService.metadata.cs or SalesDomainService.metadata.vb) with the Exclude attribute applied to the ModifiedDate, NameStyle, PasswordHash, PasswordSalt, and rowguid properties, and the Required attribute applied to the Phone property.
internal sealed class CustomerMetadata
{
    // Metadata classes are not meant to be instantiated.
    private CustomerMetadata()
    {
    }

    public string CompanyName { get; set; }
    public int CustomerID { get; set; }
    public string EmailAddress { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string MiddleName { get; set; }

    [Exclude]
    public DateTime ModifiedDate { get; set; }

    [Exclude]
    public bool NameStyle { get; set; }

    [Exclude]
    public string PasswordHash { get; set; }

    [Exclude]
    public string PasswordSalt { get; set; }

    [Required]
    public string Phone { get; set; }

    [Exclude]
    public Guid rowguid { get; set; }

    public string SalesPerson { get; set; }
    public string Suffix { get; set; }
    public string Title { get; set; }
}
Enabling Authenticated Users to Update Data
RIA Services supports authentication. For example, you can specify that only authenticated users can update the data. You can use the IsAuthenticated property to determine whether the current user is authenticated. You can use the LoggedIn and LoggedOut events to initialize values and update the user interface when the user's authentication status changes. The following code shows how IsAuthenticated, LoggedIn, and LoggedOut can be used in Home.xaml.cs (or Home.xaml.vb). The C# code requires a using statement that specifies the System.ServiceModel.DomainServices.Client.ApplicationServices namespace.
public Home()
{
    InitializeComponent();
    this.Title = ApplicationStrings.HomePageTitle;
    WebContext.Current.Authentication.LoggedIn += new System.EventHandler<AuthenticationEventArgs>(Authentication_LoggedIn);
    WebContext.Current.Authentication.LoggedOut += new System.EventHandler<AuthenticationEventArgs>(Authentication_LoggedOut);
}

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (WebContext.Current.User.IsAuthenticated)
    {
        CustomerDataGrid.IsReadOnly = false;
    }
}

void Authentication_LoggedIn(object sender, AuthenticationEventArgs e)
{
    CustomerDataGrid.IsReadOnly = false;
}

void Authentication_LoggedOut(object sender, AuthenticationEventArgs e)
{
    CustomerDataGrid.IsReadOnly = true;
}
To submit changes made in the DataGrid, you need to add buttons and event handlers that will assist with processing changes. The following shows the XAML to add save and reject changes buttons in Home.xaml.
<StackPanel HorizontalAlignment="Left" Orientation="Horizontal">
    <Button Content="Save Changes" Click="SaveButton_Click" IsEnabled="False" Margin="5" x:Name="SaveButton"></Button>
    <Button Content="Reject Changes" Click="RejectButton_Click" IsEnabled="False" Margin="5" x:Name="RejectButton"></Button>
</StackPanel>
In the XAML element for the DataGrid control, you must add the name of the event handler for the RowEditEnded event.
In the XAML element for the DomainDataSource control, you must add the name of the event handler for the SubmittedChanges event.
To submit or reject changes, you use the SubmitChanges and RejectChanges methods on the domain data source. You can use the HasChanges property to determine whether there are any pending changes. The following shows the event handlers in Home.xaml.cs (or Home.xaml.vb) to process changes. The C# code requires a using statement that specifies the System.Windows namespace.
private void SaveButton_Click(object sender, RoutedEventArgs e)
{
    domainDataSource1.SubmitChanges();
}

private void RejectButton_Click(object sender, RoutedEventArgs e)
{
    domainDataSource1.RejectChanges();
    CheckChanges();
}

private void CheckChanges()
{
    bool hasChanges = domainDataSource1.DomainContext.HasChanges;
    SaveButton.IsEnabled = hasChanges;
    RejectButton.IsEnabled = hasChanges;
}

private void CustomerDataGrid_RowEditEnded(object sender, DataGridRowEditEndedEventArgs e)
{
    CheckChanges();
}

private void domainDataSource1_SubmittedChanges(object sender, SubmittedChangesEventArgs e)
{
    if (e.HasError)
    {
        MessageBox.Show(string.Format("Submit Failed: {0}", e.Error.Message));
        e.MarkErrorAsHandled();
    }
    CheckChanges();
}
When you run the application, you cannot edit the data until you log in. The Silverlight Business Application template automatically includes a window for logging in and registering users. To log in, you click the login link in the upper right corner.
To create new accounts for authentication, you must have SQL Server Express installed.
To create a new user, you click the Register now link and specify values for the new user.
After you log in, the data in the DataGrid becomes editable. The Phone property includes a Required attribute. If you try to delete the existing Phone value, a required-field message is displayed. The validation rule is applied on the client and does not require a postback to the server.
Validation is automatically enforced for other fields based on the database definition. For example, the FirstName field does not allow null, so the field is automatically flagged as required if you try to delete its value.
When you click Save Changes, the changes are saved in the database. When you click Reject Changes, the original values are restored.
For more information about WCF RIA Services, see WCF RIA Services on MSDN.
See Also
- Walkthrough: Creating a RIA Services Solution
- Walkthrough: Using the Silverlight Business Application Template
By Microsoft Silverlight Team, Silverlight is a powerful development platform for creating engaging, interactive user experiences for Web, desktop, and mobile applications when online or offline.
|
https://msdn.microsoft.com/en-us/library/mt744386
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
exit - causes normal program termination to occur.
Several cleanup steps are performed:
- functions registered with atexit are called, in reverse order of registration
- all open C streams are flushed and closed
- files created by tmpfile are removed
- control is returned to the host environment: if exit_code is zero or EXIT_SUCCESS, an implementation-defined status indicating successful termination is returned; if exit_code is EXIT_FAILURE, an implementation-defined status indicating unsuccessful termination is returned; in other cases, an implementation-defined status value is returned.
Return value: (none).
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("data.txt", "r");
    if (fp == NULL) {
        fprintf(stderr, "error opening file data.txt in function main()\n");
        exit(1);
    }
    fclose(fp);
    printf("Normal Return\n");
}
Output:
error opening file data.txt in function main()
© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
|
http://docs.w3cub.com/c/program/exit/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Reduce Long Import Statements in React Native with Absolute Imports
In large React Native projects, it’s common to have long relative import paths like:
import MyComponent from '../../../screens/MyScreen/MyComponent'
With import paths that go up and down the folder hierarchy like that, it can be confusing to figure out which file or folder you’re actually importing, and it generally just looks messy to have several of those import statements at the top of your file.
Instead, we can convert relative import paths to absolute import paths by creating a new package.json file at any level of the folder hierarchy. That package.json file needs to specify a name that you want to call that folder:
{ "name": "screens" }
And then you can begin your import statements with that new module name:
import MyComponent from 'screens/MyScreen/MyComponent'
Note that this only works for React Native projects, and not other npm based projects like create-react-app web apps.
|
https://egghead.io/lessons/react-native-reduce-long-import-statements-in-react-native-with-absolute-imports?utm_source=rss&utm_medium=feed&utm_campaign=rss_feed
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Hey! You need to learn Python. If you are looking for a short tutorial that can give you a good start in Python, then please trust me, this will be the end of your search. Python is not only a scripting language used for data science. There are many other uses of Python; for example, a large community of developers uses Python for multimedia software development, and many developers use Python for web application development. That is why we call Python a multipurpose language, like Java.
Now, are you ready to become a hardcore Python developer? Of course you are, right? First you need to install Python: download it and install it. There are two main version families, Python 2.x and Python 3.x. 2.x comes with extensive documentation and community support; on the flip side, 3.x is faster. So which one you should choose depends entirely on your application.
I hope you have installed Python. If you are using a Windows-based operating system, you also have to add Python to your PATH environment variable. Now let's learn Python.
The road map to learning the Python essentials is simple and described below. After reading this article, you will at least be able to write basic code in Python.
Now let's go through the topics one by one.
1. Learn the Python Data Types –
Python supports five standard data types. Unlike many other programming languages, Python is quite flexible about data types: there is one root type for each family of values. For example, in Java you have to choose a specific numeric subtype such as int, float, or double, but in Python a number is simply a Number. The five standard data types are Number, String, Tuple, List, and Dictionary.
Syntax-
coin = 5  # Number declaration
result = 8.9  # Number declaration
my_str = 'Data Science Learner'  # String declaration
list_of_python = [1, 'learn python', 4.5]  # List declaration
tuple_of_python = ('Rahul', 4, 98)  # Tuple declaration
dictionary_of_python = {'key1': 'value1', 'key2': 'value2'}  # Dictionary declaration
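To see these root types in action, here is a small illustrative snippet (the variable names are just examples, and the output shown in the comments is from Python 3):
coin = 5
my_str = 'Data Science Learner'
list_of_python = [1, 'learn python', 4.5]

print(type(coin))            # <class 'int'>
print(type(my_str))          # <class 'str'>
print(type(list_of_python))  # <class 'list'>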
Difference between List and Tuple in Python –
Lists and tuples in Python look quite similar, and in fact they behave very similarly. The only difference is that you cannot update a tuple, while you can update a list. Both are similar to arrays in C, but unlike C arrays they can store values of different data types.
For example –
python_list = ['Data Science Learner', 3.4, 3]  # Here we are storing different data types in a single Python list
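As a quick illustration of that difference, using made-up values: updating a list works, while the same operation on a tuple raises an error.
python_list = ['Data Science Learner', 3.4, 3]
python_tuple = ('Data Science Learner', 3.4, 3)

python_list[0] = 'Updated'     # fine - lists are mutable
print(python_list)             # ['Updated', 3.4, 3]

python_tuple[0] = 'Updated'    # raises TypeError - tuples cannot be updated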
2. Conditional Statements in Python –
Conditional statements in Python are quite similar to conditional statements in Java and other traditional programming languages.
if <condition 1>:
    # statements
elif <condition 2>:
    # statements
else:
    # statements
There are no closing braces in Python; a ':' symbol follows every conditional statement, and the body below it is indented.
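For example, a small made-up marks check using this syntax:
marks = 72

if marks >= 80:
    print('Distinction')
elif marks >= 40:
    print('Pass')
else:
    print('Fail')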
3. Loops in Python –
If you want to be an expert in Python, the only way is to learn the basics, and loops are one of them.
Syntax of a for loop –
for var in range(start, stop):
    statement 1
    statement 2
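A minimal concrete example of this loop syntax, printing the numbers 1 to 5:
for var in range(1, 6):
    print(var)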
4. Functions in Python –
Functions are used to modularize code. We use the 'def' keyword to define a function. Working with functions in Python consists of two parts:
- Defining a Function in Python
- Calling a Function in Python
Defining a Function in Python –
Here we write the behavior of the function: what it is going to perform and what it is going to return.
For example –
def funname(parameter1, parameter2, ...):
    statement 1
    statement 2
    return expression
Calling a Function in Python –
To call a function in Python, we just pass the values (the actual arguments). No semicolon is required at the end of the calling statement.
For example –
funname(10, 20)
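Putting the two parts together, here is a small self-contained example; the function name add is just for illustration:
def add(a, b):
    return a + b

result = add(10, 20)
print(result)  # 30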
5. Python Modules –
If you want to add functionality that is not in the current module, you just have to import the module that provides it into your current script.
Syntax-
from module_name import part_you_need  # adding only one part of the module into the current namespace
from module_name import *  # importing everything from the module into the current namespace
import module_name  # importing the whole module itself
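For example, using the standard math module:
from math import sqrt   # bring only sqrt into the current namespace
print(sqrt(16))          # 4.0

import math              # import the whole module
print(math.pi)           # 3.141592653589793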
6. I/O Functions in Python –
If you want to read input as a string in Python, there are several options. I am going to list two functions for you:
- input
- raw_input
Syntax For input function in Python –
var = input(“Learn python using input function”)
var = raw_input(“Learn python using raw_input function”)
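Note that raw_input exists only in Python 2; in Python 3 it was removed and input always returns a string. Here is a small sketch that works on both versions by falling back when raw_input is not defined:
try:
    read_line = raw_input   # Python 2: raw_input returns the raw string
except NameError:
    read_line = input       # Python 3: input returns a string

name = read_line('Learn Python - enter your name: ')
print('Hello ' + name)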
Last Notes –
I have not given a detailed tutorial in this article; it can only give you a start on the essentials. After reading it, you can start coding in Python. Let me share my own experience with coding: you cannot learn any programming language by reading about it once. You have to read first, then write some code, then learn again to fill the gaps in your knowledge of that language.
Especially if you want to learn Python, you cannot be rigid about one particular approach, because it is an open source language and developers contribute something new every day. So learn the Python basics first, and for advanced work always look for a better way. The more code you write, the more your experience will help you finish similar tasks in less time. Learning a programming language is an ongoing process; never stop that process.
A Few Words about IDEs –
In this continuing series of articles, our next post will take a deeper approach to learning Python. As for IDEs for Python, there are many options; I chose PyDev with Eclipse. This IDE made my life simpler because I do not have to remember too much syntax. By knowing just a few tricks in Eclipse you can save a lot of time. As a smart coder, you should take advantage of the recommendations made by the IDE.
For example, if you want to call a function from a module, just type the name of the module followed by '.', then press Ctrl and Space together. The IDE will automatically show you all the functions inside that module; you just have to choose the one you need, without remembering its complete syntax.
I hope you have enjoyed this short tutorial on learning Python. There is another article on learning Python for data science if you already have a good understanding of the Python basics. In the Complete Overview of Learning Python for Data Analysis, you will get a complete overview of how to start programming for data science in Python.
If you have any suggestions or improvements, you can comment below. We love to write for you. If you want free ebooks and other materials on Python, do not forget to subscribe.
|
https://www.datasciencelearner.com/learn-python-essentials/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Subtracts color b from color a. Each component is subtracted separately.
// magenta - blue = red
var redColor : Color = Color.magenta - Color.blue;
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Color redColor = Color.magenta - Color.blue;
}
|
https://docs.unity3d.com/ScriptReference/Color-operator_subtract.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
At times, it is desirable to call two (or more) implementing methods through a single delegate. This becomes particularly important when handling events (discussed later in this chapter).
The goal is to have a single delegate that invokes more than one method. This is different from having a collection of delegates, each of which invokes a single method. In the previous example, the collection was used to order the various delegates. It was possible to add a single delegate to the collection more than once, and to use the collection to reorder the delegates to control their order of invocation.
With multicasting, you create a single delegate that will call multiple encapsulated methods. For example, when a button is pressed, you might want to take more than one action. You could implement this by giving the button a collection of delegates, but it is cleaner and easier to create a single multicast delegate.
Two delegates can be combined with the addition operator (+). The result is a new multicast delegate that invokes both of the original implementing methods. For example, assuming Writer and Logger are delegates, the following line will combine them and produce a new multicast delegate named myMulticastDelegate:
myMulticastDelegate = Writer + Logger;
You can add delegates to a multicast delegate using the plus-equals (+=) operator. This operator adds the delegate on the right side of the operator to the multicast delegate on the left. For example, assuming Transmitter and myMulticastDelegate are delegates, the following line adds Transmitter to myMulticastDelegate:
myMulticastDelegate += Transmitter;
To see how multicast delegates are created and used, let's walk through a complete example. In Example 12-3, you create a class called MyClassWithDelegate that defines a delegate that takes a string as a parameter and returns void:
public delegate void StringDelegate(string s);
You then define a class called MyImplementingClass that has three methods, all of which return void and take a string as a parameter: WriteString, LogString, and TransmitString. The first writes the string to standard output, the second simulates writing to a log file, and the third simulates transmitting the string across the Internet. You instantiate the delegates to invoke the appropriate methods:
Writer("String passed to Writer\n"); Logger("String passed to Logger\n"); Transmitter("String passed to Transmitter\n");
To see how to combine delegates, you create another delegate instance:
MyClassWithDelegate.StringDelegate myMulticastDelegate;
and assign to it the result of "adding" two existing delegates:
myMulticastDelegate = Writer + Logger;
You add to this delegate an additional delegate using the += operator:
myMulticastDelegate += Transmitter;
Finally, you selectively remove delegates using the minus-equals (-=) operator:
myMulticastDelegate -= Logger;
namespace Programming_CSharp
{
    using System;

    public class MyClassWithDelegate
    {
        // the delegate declaration
        public delegate void StringDelegate(string s);
    }

    public class MyImplementingClass
    {
        public static void WriteString(string s)
        {
            Console.WriteLine("Writing string {0}", s);
        }

        public static void LogString(string s)
        {
            Console.WriteLine("Logging string {0}", s);
        }

        public static void TransmitString(string s)
        {
            Console.WriteLine("Transmitting string {0}", s);
        }
    }

    public class Test
    {
        public static void Main()
        {
            // define three StringDelegate objects
            MyClassWithDelegate.StringDelegate Writer, Logger, Transmitter;

            // define another StringDelegate
            // to act as the multicast delegate
            MyClassWithDelegate.StringDelegate myMulticastDelegate;

            // Instantiate the first three delegates,
            // passing in methods to encapsulate
            Writer = new MyClassWithDelegate.StringDelegate(MyImplementingClass.WriteString);
            Logger = new MyClassWithDelegate.StringDelegate(MyImplementingClass.LogString);
            Transmitter = new MyClassWithDelegate.StringDelegate(MyImplementingClass.TransmitString);

            // Invoke the Writer delegate method
            Writer("String passed to Writer\n");

            // Invoke the Logger delegate method
            Logger("String passed to Logger\n");

            // Invoke the Transmitter delegate method
            Transmitter("String passed to Transmitter\n");

            // Tell the user you are about to combine
            // two delegates into the multicast delegate
            Console.WriteLine("myMulticastDelegate = Writer + Logger");

            // combine the two delegates; the result is
            // assigned to myMulticastDelegate
            myMulticastDelegate = Writer + Logger;

            // Call the delegated methods; two methods
            // will be invoked
            myMulticastDelegate("First string passed to Collector");

            // Tell the user you are about to add
            // a third delegate to the multicast
            Console.WriteLine("\nmyMulticastDelegate += Transmitter");

            // add the third delegate
            myMulticastDelegate += Transmitter;

            // invoke the three delegated methods
            myMulticastDelegate("Second string passed to Collector");

            // tell the user you are about to remove
            // the logger delegate
            Console.WriteLine("\nmyMulticastDelegate -= Logger");

            // remove the logger delegate
            myMulticastDelegate -= Logger;

            // invoke the two remaining
            // delegated methods
            myMulticastDelegate("Third string passed to Collector");
        }
    }
}

Output:
Writing string String passed to Writer
Logging string String passed to Logger
Transmitting string String passed to Transmitter
myMulticastDelegate = Writer + Logger
Writing string First string passed to Collector
Logging string First string passed to Collector
myMulticastDelegate += Transmitter
Writing string Second string passed to Collector
Logging string Second string passed to Collector
Transmitting string Second string passed to Collector
myMulticastDelegate -= Logger
Writing string Third string passed to Collector
Transmitting string Third string passed to Collector
In the Test portion of Example 12-3, the delegate instances are defined and the first three (Writer, Logger, and Transmitter) are invoked. The fourth delegate, myMulticastDelegate, is then assigned the combination of the first two and it is invoked, causing both delegated methods to be called. The third delegate is added, and when myMulticastDelegate is invoked, all three delegated methods are called. Finally, Logger is removed, and when myMulticastDelegate is invoked, only the two remaining methods are called.
The power of multicast delegates is best understood in terms of events, discussed in the next section. When an event such as a button press occurs, an associated multicast delegate can invoke a series of event handler methods that will respond to the event.
|
http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+12.+Delegates+and+Events/12.2+Multicasting/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
86 pull requests were merged this week. This ties for the week with the most merged pull requests; a week in September 2013 is the other record holder. To cope with the massively inflated queue, there were two roll-ups (not counted).
Breaking Changes
- extern mod is now written extern crate.
- The big codegen compiler flags pull request I warned about last week indeed landed. Many -Z options are now under -C, and a lot of previously-bare flags (such as --linker) are now also under -C.
- std::util has been removed. swap and replace now live in std::mem.
- do is once again a reserved word.
- extra::rational, extra::bigint, and extra::complex have been moved into libnum as part of the libextra dissolution.
- The borrow checker's treatment of closures has been revamped. It fixes all known soundness issues with closures. Unfortunately, it also breaks some programs that used to compile.
- Channels have been rewritten to use the internally-upgradable design that was hashed out on the list. Rather than having a separate SharedChan, Chan is now cloneable.
- The Seek API has changed a bit.
- The breaking changes in the first rollup are the removal of ptr::offset, ptr::mut_offset, ptr::is_null, and ptr::is_not_null as free functions, and the movement of extra::hex and extra::base64 to libserialize.
- std::num::Orderable has been removed.
- std::ptr saw some more cleanup; most notably, every function ending in _ptr has had that suffix removed. to_unsafe_ptr and to_mut_unsafe_ptr have also been removed.
Other Changes
- Process arguments and environment variables now use the from_utf8_lossy function that was introduced last week, rather than failing on invalid utf8. Additionally, there are now args_as_bytes and env_as_bytes functions to get the arguments and the environment raw.
- The makefiles have been refactored, and there are now make help and make tips targets with hints on how to use the build system.
- In yet another multi-thousand-line patch by eddyb, ast_map::Path no longer requires cloning, due to clever devilry.
- green task spawning was sped up by almost 5x.
- We now bundle and use compiler-rt for intrinsics rather than using the system libgcc. We still depend on libgcc for unwinding.
- The pidigits benchmark was made 20x faster by optimizing bigint.
New Contributors
- Bruno de Oliveira Abinader
- Eduard Bopp
- Edward Wang
- Jake Kerr
- Liigo Zhuang
- Matthijs van der Vleuten
- Peiyong Lin
- Tobias Bucher
- WebeWizard
Weekly Meeting
The weekly meeting discussed struct construction sugar, what to allow in statics, the crate keyword, a finally macro, and implicit trait bounds.
This Week in Servo
Servo is a web browser engine written in Rust and is one of the primary test cases for the Rust language.
This week, we landed 18 PRs.
Notable additions
- Bruno Abinader landed several DOM fixes, including #1648 and #1646
- Hyun June Kim landed initial :hover support in #1633
- Keegan McAllister restored task failure handling in #1691
- Rui renamed the .rc files to .rs in the main Servo repository in #1617
- Simon Sapin made some updates to attribute selector namespaces in #1653 and #1661
- Lars Bergstrom began the removal of non-script-crate @mut in preparation for a Rust upgrade in #1663
- Austin King added some window.console support in #1666
- Marek Šuppa landed a fix to our contributing document in #1649
- Patrick Walton made extensive optimizations to style sharing in #1644
New contributors
- Austin King (ozten)
- Marek Šuppa (mrshu)
Meetings
In this week’s meeting, we discussed our embedding plans, ACID2 status, improving the availability of E-Easy issues, and doing a Rust upgrade (we are more than one month behind Rust master).
Announcements, etc
There is simply too much happening in the community to keep track of! I recommend browsing the Rust subreddit for goings-on. Some notable ones:
- Rust By Example: HashMap
- State machines using phantom types
- golo-lang.org’s homepage design adapted to Rust. There is some discussion on reddit about this.
|
http://cmr.github.io/blog/2014/02/15/this-week-in-rust/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
/*
 * Copyright 2006 Pentaho Corporation. All rights reserved.
 * This software was developed by Pentaho Corporation and is provided under the terms
 * of the Mozilla Public License, Version 1.1, or any later version. You may not use
 * this file except in compliance with the license. If you need a copy of the license,
 * please go to. The Original Code is the Pentaho
 * BI Platform. The Initial Developer is Pentaho Corporation.
 *
 * Software distributed under the Mozilla Public License is distributed on an "AS IS"
 * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. Please refer to
 * the license for the specific language governing your rights and limitations.
 *
 * @created Jul 08, 2006
 * @author Thomas Morgner
 */
package org.pentaho.plugin.jfreereport;

import org.pentaho.core.connection.IPentahoResultSet;

/**
 * Creation-Date: 08.07.2006, 13:19:45
 *
 * @author Thomas Morgner
 * @deprecated This is an empty stub in case we have to maintain backward
 *             compatiblity.
 */
public class PentahoTableModel extends org.pentaho.plugin.jfreereport.helper.PentahoTableModel {
    private static final long serialVersionUID = 3946748761053175483L;

    public PentahoTableModel(IPentahoResultSet rs) {
        super(rs);
    }
}
|
http://kickjava.com/src/org/pentaho/plugin/jfreereport/PentahoTableModel.java.htm
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Pemi - Python Extract Modify Integrate
Project description
Welcome to Pemi’s documentation!
Pemi is a framework for building testable ETL processes and workflows. Users define pipes that define how to collect, transform, and deliver data. Pipes can be combined with other pipes to build out complex and modular data pipelines. Testing is a first-class feature of Pemi and comes with a testing API to allow for describing test coverage in a manner that is natural for data transformations.
Full documentation on readthedocs
Install Pemi
Pemi can be installed from pip:
pip install pemi
Concepts and Features
Pipes
The principal abstraction in Pemi is the Pipe. A pipe can be composed of Data Sources, Data Targets, and other Pipes. When a pipe is executed, it collects data from the data sources, manipulates that data, and loads the results into the data targets. For example, here's a simple "Hello World" pipe. It takes a list of names in the form of a Pandas DataFrame and returns a Pandas DataFrame saying hello to each of them.
import pandas as pd import pemi from pemi.fields import * class HelloNamePipe(pemi.Pipe): # Override the constructor to configure the pipe def __init__(self): # Make sure to call the parent constructor super().__init__() # Add a data source to our pipe - a pandas dataframe called 'input' self.source( pemi.PdDataSubject, name='input', schema = pemi.Schema( name=StringField() ) ) # Add a data target to our pipe - a pandas dataframe called 'output' self.target( pemi.PdDataSubject, name='output' ) # All pipes must define a 'flow' method that is called to execute the pipe def flow(self): self.targets['output'].df = self.sources['input'].df.copy() self.targets['output'].df['salutation'] = self.sources['input'].df['name'].apply( lambda v: 'Hello ' + v )
To use the pipe, we have to create an instance of it:
pipe = HelloNamePipe()
and give some data to the source named “input”:
pipe.sources['input'].df = pd.DataFrame({ 'name': ['Buffy', 'Xander', 'Willow', 'Dawn'] })
The pipe performs the data transformation when the flow method is called:
pipe.flow()
The data target named “output” is then populated:
pipe.targets['output'].df
Data Subjects
Data Sources and Data Targets are both types of Data Subjects. A data subject is mostly just a reference to an object that can be used to manipulate data. In the Pipes example above, we defined the data source called "input" as using the pemi.PdDataSubject class. This means that this data subject refers to a Pandas DataFrame object. Calling the df method on this data subject simply returns the Pandas DataFrame, which can be manipulated in all the ways that Pandas DataFrames can be manipulated.
Pemi supports 3 data subjects natively, but can easily be extended to support others. The 3 supported data subjects are
- pemi.PdDataSubject - Pandas DataFrames
- pemi.SaDataSubject - SQLAlchemy Engines
- pemi.SparkDataSubject - Apache Spark DataFrames
Schemas
A data subject can optionally be associated with a Schema. Schemas can be used to validate that the data object of the data subject conforms to the schema. This is useful when data is passed from the target of one pipe to the source of another because it ensures that downstream pipes get the data they are expecting.
For example, suppose we wanted to ensure that our data had fields called id and name. We would define a data subject like:
from pemi.fields import * ds = pemi.PdDataSubject( schema=pemi.Schema( id=IntegerField(), name=StringField() ) )
If we provide the data subject with a dataframe that does not have a field:
df = pd.DataFrame({ 'name': ['Buffy', 'Xander', 'Willow'] }) ds.df = df
Then an error will be raised when the schema is validated (which happens automatically when data is passed between pipes, as we’ll see below):
ds.validate_schema() #=> MissingFieldsError: DataFrame missing expected fields: {'id'}
We’ll also see later that defining a data subject with a schema also aids with writing tests. So while optional, defining data subjects with an associated schema is highly recommended.
Referencing data subjects in pipes
Data subjects are rarely defined outside the scope of a pipe as was done in the Schemas section above. Instead, they are usually defined in the constructor of a pipe as in the Pipes example. Two methods of the pemi.Pipe class are used to define data subjects: source and target. These methods allow one to specify the data subject class that the data subject will use, give it a name, assign a schema, and pass on any other arguments to the specific data subject class.
For example, if we were to define a pipe that was meant to use an Apache Spark dataframe as a source:
spark_session = ... class MyPipe(pemi.Pipe): def __init__(self): super().__init__() self.source( pemi.SparkDataSubject, name='my_spark_source', schema=pemi.Schema( id=IntegerField(), name=StringField() ), spark=spark_session )
When self.source is called, it builds the data subject from the options provided and puts it in a dictionary that is associated with the pipe. The spark data frame can then be accessed from within the flow method as:
def flow(self): self.sources['my_spark_source'].df
Types of Pipes
Most user pipes will typically inherit from the main pemi.Pipe class. However, the topology of the pipe can classify it according to how it might be used. While the following definitions can be bent in some ways, they are useful for describing the purpose of a given pipe.
- A Source Pipe is a pipe that is used to extract data from some external system and convert it into a Pemi data subject. This data subject is the target of the source pipe.
- A Target Pipe is a pipe that is used to take a data subject and convert it into a form that can be loaded into some external system. This data subject is the source of the target pipe.
- A Transformation Pipe is a pipe that takes one or more data sources, transforms them, and delivers one or more data targets.
- A Job Pipe is a pipe that is self-contained and does not specify any source or target data subjects. Instead, it is usually composed of other pipes that are connected to each other.
Pipe Connections
A pipe can be composed of other pipes that are each connected to each other. These connections form a directed acyclic graph (DAG). When the connections between all the pipes are executed, the pipes that form the nodes of the DAG are executed in the order specified by the DAG (in parallel when possible; parallel execution is made possible under the hood via Dask graphs). The data objects referenced by the node pipes' data subjects are passed between the pipes accordingly.
As a minimal example showing how connections work, let’s define a dummy source pipe that just generates a Pandas dataframe with some data in it:
class MySourcePipe(pemi.Pipe): def __init__(self): super().__init__() self.target( pemi.PdDataSubject, name='main' ) def flow(self): self.targets['main'].df = pd.DataFrame({ 'id': [1,2,3], 'name': ['Buffy', 'Xander', 'Willow'] })
And a target pipe that just prints the “salutation” field:
class MyTargetPipe(pemi.Pipe): def __init__(self): super().__init__() self.source( pemi.PdDataSubject, name='main' ) def flow(self): for idx, row in self.sources['main'].df.iterrows(): print(row['salutation'])
Now we define a job pipe that will connect the dummy source pipe to our hello world pipe and connect that to our dummy target pipe:
class MyJob(pemi.Pipe): def __init__(self): super().__init__() self.pipe( name='my_source_pipe', pipe=MySourcePipe() ) self.connect('my_source_pipe', 'main').to('hello_pipe', 'input') self.pipe( name='hello_pipe', pipe=HelloNamePipe() ) self.connect('hello_pipe', 'output').to('my_target_pipe', 'main') self.pipe( name='my_target_pipe', pipe=MyTargetPipe() ) def flow(self): self.connections.flow()
In the flow method we call self.connections.flow(). This calls the flow method of each pipe defined in the connections graph and transfers data between them, in the order specified by the DAG.
The job pipe can be executed by calling its flow method:
MyJob().flow() # => Hello Buffy # => Hello Xander # => Hello Willow
Furthermore, if you’re running this in a Jupyter notebook, you can see a graph of the connections by running:
MyJob().connections.graph()
Referencing pipes in pipes
Referencing pipes within pipes works the same way as for data sources and targets. For example, if we wanted to run the MyJob job pipe and then look at the source of the “hello_pipe”:
job = MyJob() job.flow() job.pipes['hello_pipe'].sources['input'].df
Where to go from here
Full documentation on readthedocs
|
https://pypi.org/project/pemi/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
The linear value of an sRGB color.
Colors are typically expressed in sRGB color space. This property returns the "linearized" color value, i.e. with the inverse of the sRGB gamma curve applied.
var color : Color = Color(0.3, 0.4, 0.6);
print(color.linear);
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Color color = new Color(0.3F, 0.4F, 0.6F);
    void Example() {
        print(color.linear);
    }
}
|
https://docs.unity3d.com/ScriptReference/Color-linear.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
A lot of fun! However, is there anyway to reduce the amount of zedz that spawn?
Hi, take a look at the file "infected\popinfected.sqf".
Lines 19 to 23 define the _probability of spawning, based on the house count of the zone.
In Aegia Marina filled at 10% I can get 30 to 70 spawned units ... :/
Hello, I found a little problem in the test mission ... the zombies appear at the right time but then disappear from a certain area.
Is there a solution?
Thank you
Thank you for reporting this, it seems I forgot to put the airfield and Aigia Marina triggers on repeat mode (but not the one at the gas station; that will be part of the next update).
Hello, when I set a waypoint marker on the map (shift+click) in game I get teleported to that location. How can I stop this?
The popinfected.sqf is missing..... causing issues. Also the dead zombie bodies are not disappearing.
Stupid question coming in, but how do I install this?
Do I just do the same thing as I would with maps/units, or do I do something else?
#include "infected\infectedsounds.hpp";
#include "infected\cfgfunctions.hpp";
null = this spawn INF_fnc_infecthim
null= [thistrigger,["marker01"],100, false, true] spawn INF_fnc_infectedzone;
null= [thistrigger,["pop000"],15,false] spawn INF_fnc_initHorde;!
|
http://www.armaholic.com/page.php?id=26438
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Transferred from: High concurrency is an important theme in JDK 1.6. The HotSpot virtual machine team spent a lot of effort in this version on implementing a variety of lock optimization techniques, such as adaptive spin (Ad
parameters, concurrency, efficiency, short period, period of time, execution time, threads, virtual machine, optimization techniques, synchronization, locks, thread implementation, mutual exclusion, processors, bias, kernel mode, competition issues, spin spinApril 27
win8 House () in win8 tips to help you improve windows channel optimization techniques: Raiders saving energy save money According to the information website Win7news reported recently in the energy conservation performance of the conc
management interface, optimization techniques, power consumption, energy consumption, unnecessary waste, saving energy, processor clock speed, channel optimization, hardware upgrade, power group, waste of energy, speed system, life management, network repeater, chip group, mindteck, compatible hardware, money saving tips, system temperature, tips windowsApril 20
win8 House () in win8 tips to help you improve windows channel optimization techniques: Windows7 operating system simply to talk about the wonders of the folder September 7, 2009, when sent over this "", the contents of which is m
quot quot, classmates, microsoft windows, optimization techniques, system directory, x windows, security mechanisms, system space, windows directory, system folder, explorer folder, unique properties, folder properties, system partition, microsoft security, channel optimization, specialized tools, directory purposes, winsxs folder, microsoft systemsApril 20
Article out to: Web Development Original Address: Does the length of domain name SEO Most of the short domain names, including the number of Chinese people like to have been robbed of registration, the so-called good domain name can only have more cr
google, perspective, domain names, point of view, creativity, domain name, optimization techniques, first choice, search engines, viscosity, health, search engine rankings, index results, seo optimization, mom, memory increase, stickiness, quality sitesJanuary 5
1. SELECT clause to avoid using "*" When you want listed in the SELECT clause of the COLUMN of all, the use of dynamic SQL column reference to '*' is a convenient way. Unfortunately, this is a very inefficient method of fact, ORACLE in the proce
lt, oracle, br, column names, optimization techniques, duplicate records, select count, parsing, data dictionary, processing time, emp, inefficient method, ename, salDecember 22
1. SELECT clause to avoid using "*" When you want listed in the SELECT clause of the COLUMN of all, the use of dynamic SQL column reference '*' is a convenient method. Unfortunately, this is a very inefficient way. In fact, ORACLE parsing proces
lt, oracle, br, column names, optimization techniques, duplicate records, select count, parsing, data dictionary, processing time, rowid, emp, ename, salDecember 22
Cloud computing comes to the birth, certainly can not do without a place, it is Google's data center and Google's data centers, not only has a huge number of server clusters, and high overall operating efficiency, PUE (Power Usage Effectiveness, the
google, optimization techniques, optimization technology, server clusters, server consolidation, server device, server equipment, auxiliary equipment, high temperatures, high temperature, center server, computing equipment, computing devices, precise temperature control, electricity efficiency, breakdown point, center computing, center administrators, consumption of electricity, trade rumorsDecember 20
ajax in the eyes of many programmers is a very complex or unfamiliar words, in fact, AX is not complicated, since the AJAX technology came out, have introduced the framework of hype, and made technical developers can not start, baidu google there are
lt, public string, google, ajax, programmers, new activexobject, highlight, td width, javascript code, printwriter, web pages, optimization techniques, page optimization, getwriter, static web, hype, java control, unfamiliar words, technical developers, professional requirementsDecember 11
Today, the entire operation of the database application performance is increasingly becoming a bottleneck, and this is particularly evident for Web applications. Performance on the database, it is not just a matter DBA need to worry about, and this i
group id, database table, select statement, table structure, sql statements, sql statement, web applications, primary key, programmers, performance bottlenecks, optimization techniques, performance tuning, data manipulation, application performance, query results, bottleneck, database applicationDecember 7
Today, the entire operation of the database application performance is increasingly becoming a bottleneck, and this is particularly evident for Web applications. Performance on the database, it is not just DBA need to worry about things, and this is
database table, table structure, sql statements, sql statement, web applications, primary key, performance optimization, performance bottlenecks, direct access, optimization techniques, data manipulation, application performance, query results, mysql query, bottleneck, sql functions, database application, mysql functions, mysql database engine, operation tablesNovember 30
When you optimize your program, to take into account many factors. Not only with optimization of the performance related to the way you use Papervision3D. Finally, talking about how the performance in the optimization of Flash. Let us not only to tes
removechild, performance optimization, development stage, optimization techniques, animation, low quality, high quality, chapter 13, screen quality, static view, quality performance, bitmap image, papervision3d, medium qualityNovember 25
Desing Pages Which will viewed by Search Engine Spiderbr> P> Cloaking is a technique that is used to display different pages to the search engine spiders than the ones normal visitors see. The usefulness of this ability results from the fact that go
optimization techniques, gt one, better solution, visual attractiveness, search engine optimization, search engines, traffic, ip address, search engine spiders, search engine spider, textual content, user agent string, doorway pages, christian louboutin, ability results, human visitorsNovember 21
Related articles: Oracle SQL optimization techniques efficient Oracle SQL statements Oracle Detailed statement optimization rules 53 (1) Recommended circle: Database circle more recommended 1. SELECT clause to avoid using "*" When you want liste
oracle, sql statements, oracle sql, related articles, column names, optimization techniques, duplicate records, select count, parsing, data dictionary, processing time, emp, sql optimization, ename, sal, optimization rulesNovember 17
Related articles: Oracle SQL optimization techniques efficient Oracle SQL statements 53 Oracle statement optimization rules explain (1) Recommended circle: Database circle more recommended 1. SELECT clause to avoid using "*" When you want listed
oracle, sql statements, oracle sql, related articles, column names, optimization techniques, duplicate records, select count, parsing, data dictionary, processing time, emp, inefficient method, sql optimization, ename, sal, optimization rulesNovember 17
1. SELECT clause to avoid using "*" When you want listed in the SELECT clause of the COLUMN of all, the use of dynamic SQL column reference '*' is a convenient way. Unfortunately, this is a very inefficient way. In fact, ORACLE parsing process ,
lt, implementation, oracle, oracle sql, circumstances, br, column names, optimization techniques, duplicate records, select count, parsing, data dictionary, processing time, rowid, emp, sql optimization, ename, sal, rollback segmentsNovember 8
When Baidu's algorithm change or strengthened, leading some sites rank for certain keywords disappeared, some administrators say that their site disappeared! Do you copy the contents because it punished? Such as the entire site using the same templat
baidu, key words, algorithm, contrary, decline, optimization techniques, internal changes, meta tags, robots, robot, site optimization, link structure, links point, excessive reliance, irregularities, case change, princip, text changes, expert analysis, trap filterNovember 5
Easier to make money at home Flash Player's memory management Assigned to the Flash Player Flash / Flex applications is relatively small majority of the memory, because too many small and frequent memory allocation would be more time-consuming activi
garbage collection, object reference, garbage collector, virtual machine, memory allocation, optimization techniques, system memory, memory management, refuse collection, recovery mechanism, chunk, true parameter, dictionary object, memory pool, money at home, memory flash, pool two, dictionary entries, friday morning, home flashNovember 4
Easier to make money at home In the "Flash / Flex application development, memory monitoring and optimization techniques," the article mentioned some of the content outlined here make a detailed explanation. 1, What is the garbage collector: Gar
object reference, two ways, regular expressions, array object, foo bar, automatic memory management, automatic garbage collection, garbage collector, application development, virtual machine, optimization techniques, sockets, detailed explanation, reference counting, free memory, second generation, money at home, formal model, friday morning, cleaNovember 4
Website optimization is divided into three levels, The first level of website optimization: fruit to optimize the program by learning Web site to enhance practical ability to grasp the basic web site optimization, there is no problem, if you build we
user experience, perspective, point of view, application development, optimization techniques, basic web, construction site, industry knowledge, web site optimization, meta tags, optimization web, structure content, search engine marketing, link structure, code knowledge, marketing strategy, level ability, search engine strategiesNovember 3
Baidu and Google search engine has two major flying into the homes of ordinary people, and we all know that Baidu and Google are the two competitors. They have in common is to give businesses a good platform for the display. When we do, Baidu, Google
quot, google, ordinary people, baidu, key words, optimization techniques, good business, business sense, search engine recognition, compulsory papers, keyword optionsNovember 1
Baidu and Google search engine has two major fly into the homes of ordinary people, and we all know that Baidu and Google are the two competitors. They have in common is to give businesses a good platform for the display. When we do Baidu, Google opt
[Tag archive for "optimization techniques": a listing of truncated, machine-translated article excerpts and keyword clouds covering Oracle SQL statement tuning (choosing the FROM-clause table order under the rule-based optimizer), Google News inclusion, Baidu/Google keyword and meta-description optimization, AJAX vs. AJAH project notes, Hibernate performance tuning, JavaScript DOM optimization, Flex/Flash font loading, dynamic vs. static programming languages, Windows XP QoS bandwidth settings, and general search-engine ranking tips.]
|
http://www.quweiji.com/tag/optimization-techniques/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
If appointments managed by a Scheduler control are assigned to resources, then it is possible to group these appointments within the scheduling area. There are three available types of grouping: by date, by resource, and no grouping at all.
This example demonstrates how to group the Scheduler control's data by resources via the SchedulerControl.GroupType property. You can do this either at design time (via the XAML markup)...
<dxsch:SchedulerControl GroupType="Resource" ... />
...or at runtime (via code in the code-behind file).
using DevExpress.XtraScheduler;
// ...
schedulerControl1.GroupType = SchedulerGroupType.Resource;
Imports DevExpress.XtraScheduler
' ...
schedulerControl1.GroupType = SchedulerGroupType.Resource
|
https://documentation.devexpress.com/WPF/8918/Controls-and-Libraries/Scheduler-legacy/Examples/Initialization/How-to-Group-Appointments-by-Resources-legacy
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
AWS Security Blog: Suppose you want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.
A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work: a principal's effective permissions are the intersection of its permissions boundary and its permissions policies. See the following diagram for a visual representation.
Figure 1: The intersection of permissions boundary and permissions policy
In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.
IAM administrator tasks
- Define the permissions boundary by creating a customer-managed policy.
- Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
- Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.
Employee tasks
- Create a role with the required permissions boundary.
- Attach a permissions policy to the role.
Administrator step 1: Define the permissions boundary
As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.
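The policy document itself doesn't appear in this excerpt. As a rough sketch (not the original post's exact policy), a boundary that limits a principal to item writes on DynamoDB tables in Frankfurt might look something like the following; the statement ID and the use of the aws:RequestedRegion global condition key are assumptions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynamoDBBoundaryFrankfurt",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "eu-central-1" }
      }
    }
  ]
}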
Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.
$ aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json
Note: You can also use an AWS managed policy as a permissions boundary.
Administrator step 2: Create and attach the permissions policy
Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create policies with a specific namespace and update versions of these policies. This allows employees to modify the policies they use to grant permissions to roles they create, but not other policies in the account. The policy below also allows employees to attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.
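This policy document is also not included in the excerpt. A sketch of the general shape it might take, assuming the iam:PermissionsBoundary condition key and the MyTestApp prefix described above (the statement IDs and exact action list are illustrative, not the original policy), is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateRolesWithBoundary",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy"
      ],
      "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
        }
      }
    },
    {
      "Sid": "ManagePoliciesInNamespace",
      "Effect": "Allow",
      "Action": [
        "iam:CreatePolicy",
        "iam:CreatePolicyVersion",
        "iam:DeletePolicyVersion"
      ],
      "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp*"
    }
  ]
}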
Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.
$ aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json
$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole
Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role
Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.
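The pass-role policy document is likewise omitted from this excerpt; a minimal sketch, assuming the MyTestApp role prefix, could be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PassMyTestAppRolesToServices",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
    }
  ]
}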
Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.
$ aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json
$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole
As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.
Managing Permissions Boundaries
Changing and modifying a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the DeleteUserPermissionsBoundary and DeleteRolePermissionsBoundary actions. Or, if you allow “iam:*” actions, then you must explicitly deny those actions.
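For example, if a permissions policy allows iam:*, an explicit deny statement along these lines (a sketch, not taken from the original post) keeps the boundary from being removed:
{
  "Effect": "Deny",
  "Action": [
    "iam:DeleteUserPermissionsBoundary",
    "iam:DeleteRolePermissionsBoundary"
  ],
  "Resource": "*"
}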
Employee step 1: Create a role by providing the permissions boundary
Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.
For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.
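The trust policy document isn't reproduced in this excerpt; for an application running on Amazon EC2 it would presumably be the standard EC2 trust relationship, something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}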
Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn't provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable is replaced with the employee's AWS account number in the command below.
$ aws iam create-role --role-name MyTestAppRole \
    --assume-role-policy-document file://Role_Trust_Policy_Text.json \
    --permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt
Next, the employee grants permissions to this role by creating the permissions policy, MyTestApp_DDB_Permissions, with the create-policy command and attaching it to the role with the attach-role-policy command. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.
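The permissions policy document isn't shown in this excerpt; a sketch of a policy granting all DynamoDB actions on that one table (the region and table ARN below are assumptions) could look like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
    }
  ]
}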
$ aws iam create-policy --policy-name MyTestApp_DDB_Permissions --policy-document file://MyTestApp_DDB_Permissions_Text.json
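The corresponding attach command isn't shown in the excerpt; it would presumably mirror the earlier attach-role-policy calls:
$ aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions --role-name MyTestAppRole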
Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
Figure 2: Effective permissions for the IAM role
Summary
We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.
If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.
Want more AWS Security news? Follow us on Twitter.
|
https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Observable methods vs. members
The examples of RxJava usage I’ve seen online generally use factory methods to produce
Observables consumed by their subscribers. For example, using Retrofit, we’d define an API call that returns an
Observable, and then later we could call this as a method on our generated service.
Observables in Retrofit example
Define a Retrofit API:
public interface GitHubService { @GET("users/{user}/repos") Observable<List<Repo>> listRepos(@Path("user") String user); }
Use it while chaining
Observables:
Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com/") // presumably GitHub's REST API base URL
    .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
    .build();

GitHubService service = retrofit.create(GitHubService.class);
service.listRepos("octocat")
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(repos -> ...)
This is not only very easy to understand, but pretty much the only way to do this because our
listRepos() method needs an argument. What if, however, we had some other method that takes no params? Could we then access an
Observable as a member on some other object instead of via one of its methods?
It is in fact very possible to do something like this:
A: Observable Member Implementation
public class Stopwatch {
    public final Observable<Long> currentTime = Observable.interval(1, TimeUnit.SECONDS);
}
...
// Using it:
Stopwatch stopwatch = new Stopwatch();
stopwatch.currentTime.subscribe(...);
whereas a more traditional implementation would look like this:
B: Observable Method Implementation
public class Stopwatch {
    public Observable<Long> start() {
        return Observable.interval(1, TimeUnit.SECONDS);
    }
}

// Using it:
Stopwatch stopwatch = new Stopwatch();
stopwatch.start().subscribe(...);
There are a few things about the previous that I think are worth pointing out:
- What if, in either implementation, we had marked the Observable as static? I'll leave that for you to think about, but honestly I think that there aren't any startling differences between static/non-static Observables and static/non-static objects of other types.
- It's interesting to note the semantic change between using a field and using a method. I tried to highlight this by naming them differently: currentTime (a noun) versus start(), a verb. Both Observables are doing the same thing, but in the first case, we imply that currentTime is an observable property of a Stopwatch, while in the second it is an operation that can be observed.
(Giving them the same name felt unnatural to me. I was taught that variable names should be nouns and method names should be verbs, and in fact in this old Sun Java code convention guide it is suggested thus for methods. Interestingly, however, they omit going so far as to suggest similarly giving variables noun names as well. Maybe we can think about this another time.)
- In the member implementation (A), the single instance variable currentTime is marked final. I did this in order to provide functional equivalence to the second implementation (B), but are there cases in which an observable member ought to be mutable?
If the Stopwatch class had some other state precision that controlled the interval's TimeUnit (instead of the hardcoded TimeUnit.SECONDS), the member currentTime would need to be reassigned to reflect the new value, and thus could not be marked final.
If currentTime was no longer final, it would be susceptible to arbitrary reassignment. How can we protect against this? In Java, we would mark it private, and then provide getter access via a getter method, and then… wait. That just results in a modified version of B! Thus, if you know your Observable might need to change, you're probably better off just using a method.
Kotlin computed properties are functions
Let’s get to what got me curious in the first place. Say you have a class like
Stopwatch that has some state you’d like to observe. Additionally, you aren’t restricted by the need for a parameter like the GitHub username in our very first example. Should you be using the approach A with some observable property, or approach B with methods?
Computed properties…
Kotlin has a number of features that expand what is possible in Java. For the question posed above, I am particularly interested in what I’m going to call a computed property (name taken from the equivalent feature in Swift): a field on an object that does not actually store a value, but instead uses a custom getter (and optionally, a setter) to implement normal property access. Let’s illustrate this with an example:
class Person(
    var firstName: String,   // Standard property with synthesized get-set accessors
    var lastName: String) {  // Another standard property

    /**
     * Our computed property! Instead of storing a value, it computes one by calling the
     * getter we implement.
     * This is a read-only computed property defined using `val` instead of `var`.
     */
    val fullName: String
        get() = "$firstName $lastName"
}
Both computed and non-computed properties are accessed the same way:
val person = Person(firstName = "Monica", lastName = "Geller")
person.firstName // "Monica"
person.lastName  // "Geller"
person.fullName  // "Monica Geller"

person.lastName = "Bing"
person.fullName  // "Monica Bing"
Hopefully that makes sense. For another illustrative example, check out the section on computed properties in the Swift Programming Language Guide and its subsequent section on read-only computed properties here. For more on Kotlin properties, check out the appropriate Kotlin page.
…can be used like factory functions
Now, we don’t have to compute our property value from other object properties. Let’s look at a familiar example:
class Stopwatch() { val currentTime: Observable<Long> get() = Observable.interval(1, TimeUnit.SECONDS) }
Yes, it’s our
Stopwatch example A from before. Obviously, we can also implement the alternate implementation B in Kotlin as well:
class Stopwatch() {
    // In Kotlin, a function that returns an expression does not need to be
    // enclosed in braces or explicit use of the `return` keyword.
    fun start(): Observable<Long> = Observable.interval(1, TimeUnit.SECONDS)
}
There is hardly any change to our Java code when it comes to actual usage, too:
val stopwatch = Stopwatch()

// Version A
stopwatch.currentTime.subscribe { ... } // Using Kotlin syntax for trailing lambdas

// Version B
stopwatch.start().subscribe { ... }
…are recalculated on each access
Notably, the read-only computed property approach does not have one of the same issues as the
public final field approach in Java. Earlier we noted that if an
Observable field depends on some kind of external state, the
public final approach is untenable because the field would need reassignment in order to generate a modified
Observable. Since a computed property uses its getter to generate a new value each time it is accessed, it is “reassigned” each time it is called.
Let’s look at an example:
class Stopwatch() {
    /** Some modifiable state that the Observables depend on */
    var precision = TimeUnit.SECONDS

    // Version A
    val currentTime: Observable<Long>
        get() = Observable.interval(1, precision)

    // Version B
    fun start(): Observable<Long> = Observable.interval(1, precision)
}

// Usage:
val stopwatch = Stopwatch()

// Both return a subscription to `Observable.interval(1, TimeUnit.SECONDS)`
stopwatch.currentTime.subscribe { ... } // Version A
stopwatch.start().subscribe { ... }     // Version B

stopwatch.precision = TimeUnit.MINUTES

// Now both return subscriptions to `Observable.interval(1, TimeUnit.MINUTES)`
stopwatch.currentTime.subscribe { ... } // Version A
stopwatch.start().subscribe { ... }     // Version B
As you can see, both versions end up producing the same value. Furthermore, A is not subject to arbitrary replacement by some other
Observable – it is marked as read-only (
val) and cannot be reassigned.
…are in fact just functions?
Intuitively, it makes sense that a read-only computed property is just some syntactic sugar on top of methods. When you look at the property definition, the
get() = ... is literally defining a custom getter method to be used for property access. What does its declaration indicate?
val currentTime: Observable<Long>
We can read that as “if you access me on/give me an object instance, I will give you an
Observable<Long>.” In mathematical terminology, a relation that maps an input to an output is called a function.
Are all properties just functions inside? The conspiracy deepens.
…are indeed functions under the hood
Let’s check this out. It’s possible that there’s some subtle difference here that might contribute to our “property vs method” debate. Maybe there is some overhead in using computed properties… or will it be the other way around? This is the question that inspired this post, and we will find the answer!
In all seriousness, it’s pretty easy to find out. I propose a simple test: look at the code generated from each of the property and function implementations and see how both differ.
Example Kotlin class:
import rx.Observable
import java.util.concurrent.TimeUnit

class Stopwatch() {
    /** Some modifiable state that the Observables depend on */
    var precision = TimeUnit.SECONDS

    // Version A
    val currentTime: Observable<Long>
        get() = Observable.interval(1, precision)

    // Version B
    fun start(): Observable<Long> = Observable.interval(1, precision)
}
The Kotlin plugin to IntelliJ provides a nifty feature that lets you look at the Java bytecode generated from any given Kotlin code. You can access this easily by tapping Cmd+Shift+A and typing in “bytecode.”
Here is the generated bytecode for our Observables in the class above using Android Studio 2.2-Preview 4 and Kotlin v1.0.2-1:
Stopwatch.kt bytecode for
currentTime and
start()
// access flags 0x11 // signature ()Lrx/Observable<Ljava/lang/Long;>; // declaration: rx.Observable<java.lang.Long> getCurrentTime() public final getCurrentTime()Lrx/Observable; @Lorg/jetbrains/annotations/NotNull;() // invisible L0 LINENUMBER // access flags 0x11 // signature ()Lrx/Observable<Ljava/lang/Long;>; // declaration: rx.Observable<java.lang.Long> start() public final start()Lrx/Observable; @Lorg/jetbrains/annotations/NotNull;() // invisible L0 LINENUMBER
That’s a bit hard to parse, though the bytecode doesn’t look terribly different. Perhaps this will be clearer if we decompile this back into Java using the sweet new Decompile button on top here:
Stopwatch.decompiled.java
import java.util.concurrent.TimeUnit;
import kotlin.Metadata;
import kotlin.jvm.internal.Intrinsics;
import org.jetbrains.annotations.NotNull;
import rx.Observable;

@Metadata(
    // Some generated metadata
)
public final class Stopwatch {
    @NotNull
    private TimeUnit precision;

    @NotNull
    public final TimeUnit getPrecision() {
        return this.precision;
    }

    public final void setPrecision(@NotNull TimeUnit <set-?>) {
        Intrinsics.checkParameterIsNotNull(<set-?>, "<set-?>");
        this.precision = <set-?>;
    }

    @NotNull
    public final Observable getCurrentTime() {
        Observable var10000 = Observable.interval(1L, this.precision);
        Intrinsics.checkExpressionValueIsNotNull(var10000, "Observable.interval(1, precision)");
        return var10000;
    }

    @NotNull
    public final Observable start() {
        Observable var10000 = Observable.interval(1L, this.precision);
        Intrinsics.checkExpressionValueIsNotNull(var10000, "Observable.interval(1, precision)");
        return var10000;
    }

    public Stopwatch() {
        this.precision = TimeUnit.SECONDS;
    }
}
Surprise, surprise.
currentTime and
start(), and their generated counterparts
getCurrentTime() and
start(), have exactly equal implementations. They are the same except for their names! While we used a bit of a trivial example, I also went ahead and tried this with some much more complex
Observables and ended up with the same result.
Read-only computed properties are just functions under the hood.
So… which way do I use?
Let’s do a quick comparison.
Java
In pure Java, you are almost certainly better off using the method implementation of
Observable generation. There is a reason that you fairly exclusively see this practice in examples, and it all comes down to safe usage patterns through encapsulation.
As a reminder, exposing an
Observable field on an object:
- Allows unsafe reassignment unless marked final
- Makes it unable to adapt to changes in its enclosing object's other fields if marked final
- Can't take or use arguments.
Methods, on the other hand:
- Are immutable
- Generate their return values when called and thus react to changes in their enclosing objects’ other fields
- Can have arguments.
Kotlin
Kotlin’s property syntax allows a very different style of programming than when only using Java. As we’ve found, using a computed property gives us the power of a getter method with the access pattern of an object field.
With regard to data generation, computed properties:
- Do not allow unsafe reassignment
- Generate their return values when called and thus react to changes in their enclosing objects’ other fields
- Still can’t take arguments.
I submit that perhaps, then, computed properties are a solid alternative to methods for many data types. Certainly they have the capacity to clarify the intent of our programs. If your computed property is just a thin API that provides access to an in-memory field, you should definitely be using it instead of a method.
However…
Observables aren’t simple data types
Most
Observable objects are complex and expensive to create. If your computed property instantiates a new Observable every time it’s accessed, using a method better signals that we are building a new object that shouldn’t be used carelessly.
Some Observables have side effects
If the
Observable you expose as a property implements
doOnNext(), you have now introduced unknown side effects in what is supposed to be a simple data access.
Or, if you have assigned a
Scheduler to the
Observable, you may have just created a new thread, unbeknownst to the caller! When using a method, there is an implicit understanding that the code body might just do more than accessing a value.
Observables are inherently asynchronous
Asynchronous interactions are notoriously difficult to reason about. The
Observable class is meant to ease some of the cognitive load on the programmer, but it still remains that they represent some kind of potential asynchronous value. This leads back to the first point that Observables aren’t simple and that methods better reflect their complexity. In fact, many common
Observables represent long running operations, and actions an object takes are better represented as methods.
Note: A good resource I found on this subject is the MSDN guide on “Choosing Between Properties and Methods”.
TL;DR: Kotlin computed properties are just methods, but that doesn’t mean you should be using them to expose your object’s
Observable API.
Now it’s time for me to start migrating my computed Observables to straight methods… :-P
|
https://itsronald.com/blog/2016/06/kotlin-get-property-vs-method/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
AWS Developer Blog: The DynamoDB object persistence model (OPM) in the AWS SDK for .NET now supports enum properties. Previously, you had to write a custom converter to serialize and deserialize the enums, storing them either as string or numeric representations. With this change, you can use enums directly, without having to implement a custom converter. The following two code samples show an example of this:
Definitions:
[DynamoDBTable("Books")] public class Book { [DynamoDBHashKey] public string Title { get; set; } public List Authors { get; set; } public EditionTypes Editions { get; set; } } [Flags] public enum EditionTypes { None = 0, Paperback = 1, Hardcover = 2, Digital = 4, }
Using enums:
var client = new AmazonDynamoDBClient();
DynamoDBContext context = new DynamoDBContext(client);

// Store item
Book book = new Book
{
    Title = "Cryptonomicon",
    Authors = new List<string> { "Neal Stephenson" },
    Editions = EditionTypes.Paperback | EditionTypes.Digital
};
context.Save(book);

// Get item
book = context.Load<Book>("Cryptonomicon");
Console.WriteLine("Title = {0}", book.Title);
Console.WriteLine("Authors = {0}", string.Join(", ", book.Authors));
Console.WriteLine("Editions = {0}", book.Editions);
Custom Converters
With OPM enum support, enums are stored as their numeric representations in DynamoDB. (The default underlying type is
int, but you can change it, as described in this MSDN article.) If you were previously working with enums by using a custom converter, you may now be able to remove it and use this new support, depending on how your converter was implemented:
- If your converter stored the enum into its corresponding numeric value, this is the same logic we use, so you can remove it.
- If your converter turned the enum into a string (if you use ToString and Parse), you can discontinue the use of a custom converter as long as you do this for all of the clients. This feature is able to convert strings to enums when reading data from DynamoDB, but will always save an enum as its numeric representation. This means that if you load an item with a "string" enum, and then save it to DynamoDB, the enum will now be "numeric." As long as all clients are updated to use the latest SDK, the transition should be seamless.
- If your converter worked with strings and you depend on them elsewhere (for example, queries or scans that depend on the string representation), continue to use your current converter.
Enum changes
Finally, it’s important to keep in mind the fact that enums are stored as their numeric representations because updates to the enum can create problems with existing data and code. If you modify an enum in version B of an application, but have version A data or clients, it’s possible some of your clients may not be able to properly handle the newer version of the enum values. Even something as simple as reorganizing the enum values can lead to some very hard-to-identify bugs. This MSDN blog post provides some very good advice to keep in mind when designing an enum.
|
https://aws.amazon.com/blogs/developer/dynamodb-datamodel-enum-support/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|