XML Schema Validation with JAXP 1.4 and JDK 6.0
A few people have found problems validating DOM instances with JAXP 1.4/JDK 6.0. I saw this question raised in the Java Technology and XML forum, and at least 3 bugs were filed for it in the last few weeks. I'll use this blog entry to explain what the problem is and how to easily fix your code.
Let's start by showing a snippet of the problematic code, which basically parses an XML file into a DOM and tries to validate it against an XML schema.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = dbf.newDocumentBuilder();
Document document = parser.parse(getClass().getResourceAsStream("test.xml"));

// create a SchemaFactory capable of understanding XML schemas
SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

// load an XML schema
Source schemaFile = new StreamSource(getClass().getResourceAsStream("test.xsd"));
Schema schema = factory.newSchema(schemaFile);

// create a Validator instance and validate document
Validator validator = schema.newValidator();
validator.validate(new DOMSource(document));
Can you spot any problems in this code? Well, there's nothing obviously wrong with it, except that if you try this with JAXP 1.4 RI or JDK 6.0, you're going to get an error like,
org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'foo'. at ...
So what is the problem then? Namespace processing. By default, and for historical reasons, namespace awareness isn't enabled in a DocumentBuilderFactory, which means the document that is passed to the validator object isn't properly constructed. You can fix this problem by adding just one line,
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);    // Must enable namespace processing!!
DocumentBuilder parser = dbf.newDocumentBuilder();
...
By now you may be asking yourself why this is even reported as a problem. Naturally, XML Schema validation requires namespace processing. It turns out that in JDK 5.0 this magically worked in many cases. Notice the use of the term "magically". By that I mean, "we don't know how it worked before but we are certainly looking into it".
So for now I'm just reporting the problem and proposing a fix for it. But I still owe you a better explanation as to why this worked before --or maybe you know why and you can tell me?
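Putting the fix together, here is a complete, runnable sketch. Note that the inline schema, the urn:example namespace, and the "foo" element below are stand-ins of my own for the post's test.xsd and test.xml, which aren't shown:

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import java.io.StringReader;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ValidateDemo {
    // Hypothetical inline stand-ins for test.xsd and test.xml.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'"
      + " targetNamespace='urn:example' xmlns='urn:example'"
      + " elementFormDefault='qualified'>"
      + "<xs:element name='foo' type='xs:string'/></xs:schema>";
    static final String XML = "<foo xmlns='urn:example'>bar</foo>";

    static boolean validated = false;

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // the one-line fix from the post
        Document document = dbf.newDocumentBuilder()
            .parse(new InputSource(new StringReader(XML)));

        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new StreamSource(new StringReader(XSD)));

        // Throws SAXException (cvc-elt.1) if the DOM was built without namespaces.
        schema.newValidator().validate(new DOMSource(document));
        validated = true; // reached only if validation succeeded
        System.out.println("valid");
    }
}
```

Without the setNamespaceAware(true) line, this same program fails with the cvc-elt.1 error quoted above.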
https://weblogs.java.net/blog/spericas/archive/2007/03/xml_schema_vali.html
CC-MAIN-2015-32
en
refinedweb
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.5
- Component/s: Compiler (General)
- Labels: None
- Patch Info: Patch Available
Description
The patch from THRIFT-857, applied as r987565, didn't work for Smalltalk. (Sorry, I tested it with Python and didn't look too hard at the Smalltalk.)
However, there's a bigger problem: the Smalltalk generator is calling back into program_ to look up namespaces using "smalltalk.<foo>" while it's registered as a generator for "st", so the root namespace check only allows "st.foo" namespaces. I really hate this, but if there are people expecting "namespace smalltalk" to work, maybe the best fix would be to hard-code the conversion to "st":
(Note: I ran the Smalltalk generator this time to make sure the "category" line was there, but I would in no way claim to have tested the Smalltalk.)
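To make the proposed special case concrete, here is a hypothetical sketch in Java (the real generator is C++, and ALIASES, declare, and namespaceFor are illustrative names, not Thrift code) of an alias table that maps the legacy "smalltalk" spelling onto the registered "st" name:

```java
import java.util.HashMap;
import java.util.Map;

public class NamespaceLookup {
    // Deprecated legacy spellings mapped to the registered generator name.
    static final Map<String, String> ALIASES = new HashMap<>();
    static { ALIASES.put("smalltalk", "st"); }

    // Namespaces declared in the .thrift file, keyed by canonical name.
    static final Map<String, String> declared = new HashMap<>();

    static String canonical(String lang) {
        return ALIASES.getOrDefault(lang, lang);
    }

    static void declare(String lang, String ns) {
        declared.put(canonical(lang), ns);
    }

    static String namespaceFor(String lang) {
        return declared.get(canonical(lang));
    }

    public static void main(String[] args) {
        declare("smalltalk", "Foo");            // legacy "namespace smalltalk" still works...
        System.out.println(namespaceFor("st")); // ...and resolves via the registered "st" name
    }
}
```

Either spelling resolves to the same entry, which is the behavior the hard-coded conversion is meant to preserve.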
Activity
I just committed this. I added a warning message to our special case indicating that it was deprecated.
Thanks for the patch, Bruce!
https://issues.apache.org/jira/browse/THRIFT-877?focusedCommentId=12904254&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
PassportAuthenticationModule Class
Provides a wrapper around PassportAuthentication services. This class cannot be inherited.
For a list of all members of this type, see PassportAuthenticationModule Members.
System.Object
System.Web.Security.PassportAuthenticationModule
[Visual Basic]
NotInheritable Public Class PassportAuthenticationModule
   Implements IHttpModule

[C#]
public sealed class PassportAuthenticationModule : IHttpModule

[C++]
public __gc __sealed class PassportAuthenticationModule : public IHttpModule

[JScript]
public class PassportAuthenticationModule implements IHttpModule
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Remarks
You must install the Passport SDK in order to use the .NET Passport classes and methods. Passport SDK version 1.4 is supported but not recommended. Passport SDK version 2.0 is both supported and recommended.
CAUTION Only Passport SDK version 2.0 is supported under Windows XP.
The PassportAuthentication_OnAuthenticate event is raised for applications that are designed to attach a custom IPrincipal object to the context. The Passport service itself does the authentication, so that cannot be overridden.
Requirements
Namespace: System.Web.Security
Platforms: Windows 2000, Windows XP Professional, Windows Server 2003 family
Assembly: System.Web (in System.Web.dll)
See Also
PassportAuthenticationModule Members | System.Web.Security Namespace
https://msdn.microsoft.com/en-us/library/system.web.security.passportauthenticationmodule(v=vs.71).aspx
WaitHandle.WaitAll Method (WaitHandle[], Int32, Boolean)
Return Value
Boolean: true when every element in waitHandles has received a signal; otherwise, false.
If millisecondsTimeout is zero, the method does not block. It tests the state of the wait handles and returns immediately.
AbandonedMutexException is new in the .NET Framework version 2.0. In previous versions, a wait that completed because a mutex was abandoned simply returned true.
Notes on Exiting the Context
The exitContext parameter has no effect unless the WaitAll method is called from inside a nondefault managed context. The thread returns to the original nondefault context after the call to the WaitAll method completes.
This can be useful when the context-bound class has the SynchronizationAttribute attribute. In that case, all calls to members of the class are automatically synchronized, and the synchronization domain is the entire body of code for the class. If code in the call stack of a member calls the WaitAll method and specifies true for exitContext, the thread exits the synchronization domain, allowing a thread that is blocked on a call to any member of the object to proceed. When the WaitAll method returns, the thread that made the call must wait to reenter the synchronization domain.
The following code example shows how to use the thread pool to asynchronously create and write to a group of files. Each write operation is queued as a work item and signals when it is finished. The main thread waits for all the items to signal and then exits.
using System;
using System.IO;
using System.Security.Permissions;
using System.Threading;

class Test
{
    static void Main()
    {
        const int numberOfFiles = 5;
        string dirName = @"C:\TestTest";
        string fileName;
        byte[] byteArray;
        Random randomGenerator = new Random();
        ManualResetEvent[] manualEvents = new ManualResetEvent[numberOfFiles];
        State stateInfo;

        if(!Directory.Exists(dirName))
        {
            Directory.CreateDirectory(dirName);
        }

        // Queue the work items that create and write to the files.
        for(int i = 0; i < numberOfFiles; i++)
        {
            fileName = string.Concat(dirName, @"\Test", i.ToString(), ".dat");

            // Create random data to write to the file.
            byteArray = new byte[1000000];
            randomGenerator.NextBytes(byteArray);

            manualEvents[i] = new ManualResetEvent(false);
            stateInfo = new State(fileName, byteArray, manualEvents[i]);
            ThreadPool.QueueUserWorkItem(new WaitCallback(Writer.WriteToFile), stateInfo);
        }

        // Since ThreadPool threads are background threads,
        // wait for the work items to signal before exiting.
        if(WaitHandle.WaitAll(manualEvents, 5000, false))
        {
            Console.WriteLine("Files written - main exiting.");
        }
        else
        {
            // The wait operation times out.
            Console.WriteLine("Error writing files - main exiting.");
        }
    }
}

// Maintain state to pass to WriteToFile.
class State
{
    public string fileName;
    public byte[] byteArray;
    public ManualResetEvent manualEvent;

    public State(string fileName, byte[] byteArray, ManualResetEvent manualEvent)
    {
        this.fileName = fileName;
        this.byteArray = byteArray;
        this.manualEvent = manualEvent;
    }
}

class Writer
{
    static int workItemCount = 0;
    Writer() {}

    public static void WriteToFile(object state)
    {
        int workItemNumber = workItemCount;
        Interlocked.Increment(ref workItemCount);
        Console.WriteLine("Starting work item {0}.", workItemNumber.ToString());
        State stateInfo = (State)state;
        FileStream fileWriter = null;

        // Create and write to the file.
        try
        {
            fileWriter = new FileStream(stateInfo.fileName, FileMode.Create);
            fileWriter.Write(stateInfo.byteArray, 0, stateInfo.byteArray.Length);
        }
        finally
        {
            if(fileWriter != null)
            {
                fileWriter.Close();
            }

            // Signal Main that the work item has finished.
            Console.WriteLine("Ending work item {0}.", workItemNumber.ToString());
            stateInfo.manualEvent.Set();
        }
    }
}
https://technet.microsoft.com/en-us/library/27y100eh.aspx
Interlanguage links
If you need an interlanguage link added to this page, please add a note here on the Discussion page, and the link will be added ASAP. Thanks. -- dria 11:46, 8 September 2005 (PDT)
Search plugins
Add a link to SearchPlugins to the topics, for completeness.
Recent changes section usefulness?
The recent changes list is useless. It looked like this just a few minutes ago:
* Talk:Mozilla Developer Center Contents
* Special:Log/protect
* Category talk:All Categories
* Talk:XSLT
* XSLT
Could anyone explain to me who's interested in such a list? I mean, it's useless for devmo contributors, who need to track the real Recent changes page anyway (it has explanations, and there are quite a lot of changes happening to the wiki). And it's useless for regular visitors - in the above example, three of the five entries are 'talk' pages and one is a 'special' page. For someone who isn't familiar with wikis, this looks like junk, and even if we filter the list to show only changes in the main namespace, five recent changes are too few to be useful.
--Nickolay 12:10, 8 September 2005 (PDT)
https://developer.mozilla.org/en-US/docs/Talk:Mozilla_Developer_Center_contents$revision/143717
- 8 Jun 2008 9:19 PM
Yes, I know, but I can't get it! Can you help me?
var bd = Ext.getBody();
var dialog = new Ext.ux.UploadDialog.Dialog({
// ******
height: 200,
width:...
- 28 May 2008 3:57 PM
Is there a way to configure the extension to show the UploadDialog as a panel and not as a window?
See you,
Fernando.
- 19 May 2008 1:24 PM
Hi, I'm trying to use your extension, which looks very pretty, but I've had no luck with it. I'm not getting any information from the server. I read the whole thread, so I know that there is implemented...
- 19 Oct 2007 2:22 PM
Yes, I realize it is a timing problem, so I'll have to analyze how to use defer, because I've never used it.
The example starts to work after the first pass, obviously because the images are already loaded,...
- 17 Oct 2007 5:29 AM
I tried your change, and I get close to what I want, but for some reason the images show a strange effect; they seem to blink several times before changing.
index.js
Ext.onReady(function() {
...
- 17 Oct 2007 5:26 AM
I tried this before but with no success.
- 16 Oct 2007 9:21 PM
Well, I wrote a script that gets some photos from a directory into an array and then shows the photos in a loop, one at a time with a delay. The problem is that the effect appears before the new image...
- 30 Aug 2007 11:20 PM
Hi, I got it. Now it's working; I'll post the code and comment on it tomorrow, because I'm too tired :D
Ext.onReady(function() {
var oParams = {
action: "verImagenes",
imagen: 1,
...
- 30 Aug 2007 7:21 PM
I tried your code, but I received "syntax error ()" in Firebug.
- 30 Aug 2007 6:16 AM
Hi, I tried your code but it doesn't work. It's still the same problem here. The params sent to the server are always the same, and I want them to be different in each request. In other words, I need to...
- 29 Aug 2007 3:17 PM
I think you don't understand what I'm trying to do. Let's see what I send and what I really want to send.
Firebug (What i have)
[CODE]
1
- 29 Aug 2007 1:57 PM
Hi, I'm trying to use UpdateManager to update a div on my page. I want to send a value to the server; the server then responds with another value, and I want to use this value and send it again to the...
- 27 Aug 2007 3:56 PM
Yes, I understand this. But is this the only way? I don't want to rename my JavaScript files to .php. It seems awful; then again, perhaps it is the only way. I'm just asking :-)
Thanks
- 27 Aug 2007 12:18 PM
Hi folks, the truth is that, from what I've been looking at, what was suggested to me can't be done:
var params = Ext.urlEncode({"imageId" : imgId, "num": <?=$_GET['number']?>});
Can anyone think of...
- 22 Aug 2007 5:03 PM
Is it correct to use PHP tags inside a JavaScript file?
var params = Ext.urlEncode({"imageId" : imgId, "num": <?=$_GET['number']?>});
Perhaps it is, and the problem is just that I don't like it too...
- 21 Aug 2007 4:00 PM
Thanks for everything, guys. Now it's working, but I still have some doubts.
example.php
<?php
$id = $_POST["imageId"];
if ($id == 1) {
$myData = array(success => false, msg => 'No se encontro la...
- 21 Aug 2007 7:53 AM
Guys, thank you a lot for your help. I'm learning a lot.
Nullity: I tried your code and it's working great.
BernardChhun: now I'm trying your code. Even though Nullity's code is working, I...
- 21 Aug 2007 6:04 AM
Hi, I want to do something but I feel a little bit lost. For example, I have a page with 2 divs. In one div there are thumbnails of images; the other div is empty until I click on a thumbnail and...
- 18 Aug 2007 3:37 AM
- Replies
- 161
- Views
- 129,153
Hi, the extension is GREAT!! Just what I'm looking for. Could you post the PHP files used in the demo, so we can understand the whole process, please?
Thanks.
- 18 Aug 2007 12:25 AM
Awful!! With no documentation, using namespaces is too much of a pain. Another thing: I can't make the profile permanent. The option to make it permanent is not appearing.
Still using Spket even though I...
- 17 Aug 2007 7:40 PM
Well, I decided to download and install Aptana IDE standalone, because the Eclipse plugin installation was bugging me. Now the code completion for ExtJS is working, but I don
- 17 Aug 2007 12:28 AM
When I open an HTML file with my version of Eclipse in the Aptana perspective, the Code Assist Profile doesn't pick up the JS files that are included in the HTML. Why? I'm using Aptana as a plugin.
- 15 Aug 2007 4:29 PM
Ok, that
- 9 Aug 2007 2:02 PM
Can anyone help me with code completion in Eclipse with the Aptana plugin?? I can't get it to work!!
Thanks.
- 9 Aug 2007 11:34 AM
- Replies
- 27
- Views
- 47,444
Can you put the source code up for download?
Thanks.
Results 1 to 25 of 26
https://www.sencha.com/forum/search.php?s=f74c2319f4440038c83f9213b736babb&searchid=12174059
How to access desktopcouch contacts from outside evolution?
I'd like to be able to query my desktopcouch contacts list from mutt or emacs; this is because, while I use evolution on my full-powered system, on my small bring-it-everywhere laptop I run a minimal X session and emacs. I'd like to have my contacts available on both, though. Can you guys point me to a resource somewhere that documents how I could perform a query on the db externally, say from bash or python? I could then hook into mutt or emacs using their own interfaces.
Thanks, I'm really looking forward to desktopcouch!
matt
Question information
- Language:
- English
- Status:
- Solved
- Assignee:
- Stuart Langridge
- Solved by:
- Matt Price
- Solved:
- 2009-10-15
- Last query:
- 2009-10-15
- Last reply:
- 2009-10-15
- Whiteboard:
- Stuart, Can you answer this question for Matt? Thanks!
Matt: those docs should be on your machine as well as /usr/share/
Nicola and Stuart, thanks for the pointers. Looking at the docs, it doesn't seem entirely straightforward to query the database, say, looking for a name or an email. So, for instance, I have this couchdb record in my contacts database (ugly text copied from the html view):
_id "pas-id-
description
address
If I know this in advance, I can access the individual fields like this:
>>> db = CouchDatabase(
>>> fetched = db.get_
>>> print fetched[
But what if I want to search for people named 'me', or addresses containing "gmail"? Will I need to create a design document, then write a view, and a function that iterates over the rows returned?
And do the map/reduce functions need to be written in JavaScript?
Or is there already a design document -- maybe one created by evolution -- that I can query directly from python? (I noticed the web interface doesn't think there are any permanent design docs in the contacts db - I'm wondering also if that's one reason why email-address completion is currently so slow in evolution.)
Thanks much for your help. I'll keep mining the web for more info, but any further assistance you guys can give is much appreciated.
Matt
Hey, sorry for the lousy formatting on that last response; it was copied and pasted from evolution after my last email was rejected...
Yep; the way to query the database is by writing a view to do so. In your example, your view would look like:
function(doc) {
emit(
}
and then you access that view by name
result = db.execute_
print result["me"]
On Thu, 2009-10-15 at 18:06 +0000, Stuart Langridge wrote:
> Your question #85917 on desktopcouch changed:
> https:/
Since you're generous enough to answer: this works great, thusly:
>>> map_js = "function(doc) {emit (doc.last_
>>> db.add_
>>> result=
>>> print result[
[<Row id='pas-
'me', 'last_name': 'gmail', '_rev':
'1-74cb956983d1
'http://
'_id': 'pas-id-
{'eed38e30-
'address': '<email address hidden>'}}}>]
print result just gave me the object description:
<ViewResults <PermanentView
'_design/
Since what I really want is a completion function, I think I need to return something like a list of dictionaries, or maybe just of lists:
[['Matt Price', '<email address hidden>'],['Matt
Price','<email address hidden>
What do you think would be the best way to get this information? I thought, from the way you responded last time, that there's an efficient way of grabbing the data directly from couchdb, rather than getting a big old python object and then massaging it slowly in my python code.
Thanks for being patient with me. It's fun to learn this stuff.
matt
--
Matt Price
Getting closer to understanding this. If I do this:
result=
then result.rows returns a tuple of row objects that look like this:
<Row id='pas-
But if instead I define a very simple mapping function:
map_js = "function(doc) {emit (doc.last_
db.add_
result=
then the tuple returned looks like this:
<Row id='pas-
And the key is now a useful value. Nonetheless, I can't just execute
print result["me"]
and get the values I'm looking for. Instead, I have a somewhat ugly-feeling selection logic:
for x in result:
if x.key == "me":
for y in x.value[
print x.value[
This seems a little clumsy to me, and I'm wondering whether I'm missing a class somewhere that would make my code less convoluted and ultimately more reusable by other people.
It seems a bit odd to keep posting to this question, and I know you're all busy with the release coming up, so just ignore this if it's the wrong forum.
thanks for all the help,
matt
I'm returning to this after a while and I have some very simple code
that works OK for a query on last name:
[oops, hit ctrl-return]
here's the code:
#!/usr/bin/python
import sys
from desktopcouch.
from desktopcouch.
# define the search string
searchString=""
if len(sys.argv) > 1:
searchString= sys.argv[1]
# initialize db
db = CouchDatabase(
#create view
design_doc = "matts_cli_client"
map_js = "function(doc) {emit (doc.last_
db.add_
# results=
results=
# print matching results
for x in results:
if searchString.
print x.value[
if "email_addresses" in x.value:
for y in x.value[
-------
This seems pretty functional for now, and I can figure out the best way to format the output for emacs or mutt a little later. First, though, I'd like to improve the search a little. This very simple function lets me search on last name only, but it would probably be better if it checked for matches in a variety of fields -- say first, last, and all the email addresses. It'd be great if my key, in the view, were a combination of those fields (so that the key looked like, say, "Matt Price <email address hidden>"). In fact, if I could just get the map/reduce functions to produce a sequence of such keys, that'd really be all I need.
So I just wondered whether you could tell me where the evolution contact-search design document is, so I could use it as a model -- I assume it's somewhere, but I didn't find it in a quick search through the source of evolution-couchdb and desktopcouch.
Thanks as always!
Matt,
You may not know that a map function can emit more than once per document. So:
import sys
from desktopcouch.
from desktopcouch.
# define the search string
searchString=""
if len(sys.argv) > 1:
searchString = sys.argv[1]
# initialize db
db = CouchDatabase(
#create view
design_doc = "matts_cli_client"
map_js = """function(doc) {
if (doc.last_name) emit(doc.
if (doc.first_name) emit(doc.
for (k in doc.email_
}
}"""
db.add_
# results=
results=
# print matching results
for contact in results[
print "%(first_name)s %(last_name)s" % contact.value
if "email_addresses" in contact.value:
for eml in contact.
print " %s" % contact.
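Since the snippets above are truncated, here is a self-contained simulation of the multi-emit idea. CouchDB map functions are really JavaScript run by the server; this Java sketch just mimics emit() in memory (the Row class, field names, and sample document are illustrative) to show that one contact document can produce an index row for its last name, its first name, and each email address:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiEmitDemo {
    // One emitted view row: a key plus the id of the document that produced it.
    static class Row {
        final String key;
        final String id;
        Row(String key, String id) { this.key = key; this.id = id; }
    }

    static final List<Row> rows = new ArrayList<>();

    static void emit(String key, String id) { rows.add(new Row(key, id)); }

    // The "map function": called once per document, may emit several rows.
    static void map(Map<String, Object> doc) {
        String id = (String) doc.get("_id");
        if (doc.get("last_name") != null) emit((String) doc.get("last_name"), id);
        if (doc.get("first_name") != null) emit((String) doc.get("first_name"), id);
        Object emails = doc.get("email_addresses");
        if (emails != null) {
            for (Object e : (List<?>) emails) emit(e.toString(), id);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("_id", "pas-id-1");
        doc.put("first_name", "Matt");
        doc.put("last_name", "Price");
        doc.put("email_addresses", List.of("matt@example.org"));
        map(doc);
        // One document produced three index rows: last name, first name, email.
        System.out.println(rows.size());
    }
}
```

Looking up any of the three keys in the resulting view finds the same document, which is why a single view can serve first-name, last-name, and email completion at once.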
Hi Matt, there are some initial Python API docs in the desktopcouch/records/doc directory in the tarball or branch; try them and see if they make sense.
Once you have any integration working with mutt or emacs, it'd be great to have a look at it!
https://answers.launchpad.net/desktopcouch/+question/85917
Shell Developer's Guide
The Windows UI provides users with access to a wide variety of objects necessary to run applications and manage the operating system. The most numerous and familiar of these objects are the folders and files that reside on computer disk drives. There are also a number of virtual objects that allow the user to do tasks such as sending files to remote printers or accessing the Recycle Bin. The Shell organizes these objects into a hierarchical namespace, and provides users and applications with a consistent and efficient way to access and manage objects.
In this section
- Security Considerations: Microsoft Windows Shell
- Guidance for Implementing In-Process Extensions
- Shell and Common Controls Versions
- Implementing the Basic Folder Object Interfaces
- Implementing a Custom File Format
- Shell Extensibility (Creating a Data Source)
- Implementing Control Panel Items
- Supporting Shell Applications
- Legacy Shell Topics
Document Conventions
The Shell Developer's Guide follows the usual Windows Software Development Kit (SDK) document conventions. Two conventions in particular should be noted:
- Sample code omits normal error-correction code. This code has been removed only to make the code more readable. You should include all appropriate error-correction code in your own applications.
- To make sample registry entries clear, key, subkey, and entry names as well as values are displayed in a standard font. User- or application-defined item names are italicized.
https://msdn.microsoft.com/en-us/library/bb776778(d=printer,v=vs.85).aspx
Details
Description
Issue Links
Activity
Integrated in Hadoop-Common-trunk-Commit #367
HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..). Contributed by Erik Steffl
I have committed this. Thanks, Erik!
Thanks everyone again.
HADOOP-6884-0.22-1.patch: 'ant test' and 'ant test-patch' results (run manually because Hudson is down).
Results of 'ant test':
BUILD SUCCESSFUL
Total time: 6 minutes 44 seconds
Results of 'ant test-patch':
[exec] There appear to be 2 release audit warnings before the patch and 2 release audit warnings after applying the patch.
[exec]
[exec]
[exec]
[exec]
[exec] -1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include]
Javadoc warnings are unrelated to the patch:
[exec] [javadoc] Constructing Javadoc information...
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/SecurityUtil.java:39: warning: sun.security.jgss.krb5.Krb5Util is Sun proprietary API and may be removed in a future release
[exec] [javadoc] import sun.security.jgss.krb5.Krb5Util;
[exec] [javadoc] ^
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/SecurityUtil.java:40: warning: sun.security.krb5.Credentials is Sun proprietary API and may be removed in a future release
[exec] [javadoc] import sun.security.krb5.Credentials;
[exec] [javadoc] ^
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/SecurityUtil.java:41: warning: sun.security.krb5.PrincipalName is Sun proprietary API and may be removed in a future release
[exec] [javadoc] import sun.security.krb5.PrincipalName;
[exec] [javadoc] ^
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/KerberosName.java:31: warning: sun.security.krb5.Config is Sun proprietary API and may be removed in a future release
[exec] [javadoc] import sun.security.krb5.Config;
[exec] [javadoc] ^
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/KerberosName.java:32: warning: sun.security.krb5.KrbException is Sun proprietary API and may be removed in a future release
[exec] [javadoc] import sun.security.krb5.KrbException;
[exec] [javadoc] ^
[exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/common-trunk/src/java/org/apache/hadoop/security/KerberosName.java:81: warning: sun.security.krb5.Config is Sun proprietary API and may be removed in a future release
[exec] [javadoc] private static Config kerbConf;
[exec] [javadoc] ^
[exec] [javadoc] ExcludePrivateAnnotationsStandardDoclet
[exec] [javadoc] Standard Doclet version 1.6.0_06
[exec] [javadoc] Building tree for all the packages and classes...
[exec] [javadoc] Building index for all the packages and classes...
[exec] [javadoc] Building index for all classes...
[exec] [javadoc] Generating /home/steffl/work/svn.isDebugEnabled/common-trunk/build/docs/api/stylesheet.css...
[exec] [javadoc] 6 warnings
I've got a feeling that Scott is very strong about moving Hadoop to SLF4J.
More so: if there is going to be a lot of work building custom wrappers and changing logging signatures across the code base, then SLF4J should be a strong contender, since that is what it does.
If building such a thing is outside the scope here, then guard statements surely work.
In a blue-sky world, SLF4J (or some framework) would modify bytecode to push its guard statement from inside its log method to around it at call sites. But now I'm just dreaming.
I have had good experience with a proprietary logging wrapper that lowered string concatenation churn caused by method invocations in the past – many years before SLF4J did it – and with an additional 200 or so method signatures to avoid autoboxing in 99% of use cases. A bit ugly, but very effective and we didn't have to teach developers when or why to guard their logging statements.
Thanks everyone for spending time on this issue. Again, I do not oppose the other ideas, but they are just outside the scope of this issue. You are very welcome to work on them.
(Erik is running tests on his patch.)
I've got a feeling that Scott is very strong about moving Hadoop to SLF4J. This clearly does not fall within the scope of this issue, as I explained before. But if you would like to invest in this subject, you are welcome to create an issue and provide arguments for the transition: maybe your experience with SLF4J in Avro, a performance comparison, etc.
Konstantin, thanks very much for running this benchmark! It sounds like that, or any other LOG.debug() call that might be called while servicing an otherwise inexpensive RPC request, should be guarded with an isDebugEnabled() or replaced with an slf4j or some other wrapper that avoids string concatenation.
I think Doug's request to provide benchmarks is reasonable. We should benchmark more.
So I modified NNThroughputBenchmark, to benchmark getFileStatus(). In current trunk getFileStatus() calls Groups.getGroups(), which has a LOG.debug() in it. In NNThroughputBenchmark you can specify a logLevel, so if I run it with logLevel = DEBUG the debug message from Groups.getGroups() is printed for every getFileStatus() call, and no other messages are printed.
I compared two versions of the code with logLevel = INFO:
- Unmodified Groups.getGroups(), where the argument for the debug call is calculated as a sum of strings.
- Modified Groups.getGroups() with if(LOG.isDebugEnabled()) before the logging statement.
Results:
- 10,000 calls of getFileStatus() in (1) yield 22,905 ops/sec; in (2), 24,610 ops/sec. This is about a 7% difference.
- I increased the namespace to 100,000 files and performed that many calls of getFileStatus() for both setups. I still see 3-4% improvement in case (2).
The gain is less because it is absorbed by the increased cost of navigating down the namespace tree.
- Did the same with 1,000 files. (2) is better about 12-13%.
Groups.getGroups() is a very popular method: it is part of user authentication, so this log message is part of each and every name-node call. In many cases the cost of concatenating strings in it will be absorbed by disk I/O operations, but it is still good to have this improvement.
I'll post the patch for NNThroughputBenchmark in another jira shortly.
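The pattern being benchmarked can be shown in a minimal, self-contained sketch. FakeLog and expensiveMessage below are stand-ins of mine (not Hadoop or commons-logging code) that count how often the debug argument actually gets built:

```java
// Minimal stand-in for a logger whose debug level is disabled.
class FakeLog {
    private final boolean debugEnabled;
    FakeLog(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
    boolean isDebugEnabled() { return debugEnabled; }
    void debug(Object msg) { /* level is off: message is discarded */ }
}

public class GuardDemo {
    static int concatCount = 0;

    // Stands in for an expensive argument computation (string concatenation etc.).
    static String expensiveMessage(String user) {
        concatCount++;
        return "resolved groups for user " + user;
    }

    public static void main(String[] args) {
        FakeLog log = new FakeLog(false); // debug logging disabled

        // Unguarded call: expensiveMessage runs even though debug is off.
        log.debug(expensiveMessage("alice"));
        int afterUnguarded = concatCount; // 1: the argument was still built

        // Guarded call: expensiveMessage is skipped entirely.
        if (log.isDebugEnabled()) {
            log.debug(expensiveMessage("bob"));
        }
        int afterGuarded = concatCount; // still 1: nothing was built

        System.out.println(afterUnguarded + " " + afterGuarded);
    }
}
```

The guard saves exactly the work that the benchmark above measures: the argument expression is never evaluated when the level is disabled.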
I had posted my +1 for the patch earlier. But echoing what Eli has said already, we should open another jira and explore some of the good ideas that this jira has generated.
+1.
Doug, I don't see your point - the patch only makes the code consistent; as I mentioned previously, a lot of debug() calls are already guarded by isDebugEnabled().
If I had started a new practice or tried to bring in a new coding style/practice, I would understand your point. But defending inconsistent code?
The patch might not be nice to look at, but:
- it's best from a performance viewpoint
- even though it adds code, LESS code is executed
- the code is less readable on a small scale (each individual statement is longer) but more readable on a big scale - no surprises (all debug messages are the same), easy to maintain (easy to find debug messages that are not guarded and thus potential performance problems), and it's obvious that a debug message is optimal (you don't need to investigate whether it's on a critical path or whether it's good enough - in some cases the arguments themselves might be expensive, i.e. a function call or a new object is created)
- it's a relatively standard practice (i.e. the Java language doesn't offer a much better solution, as seen by this long discussion)
This issue drew out an unbelievably long discussion. Erik's patch is great. Of course, there is room for improvement, as always, and as with many other patches. Quite a few people have proposed "better" ideas, but only Erik has provided patches. I am fine with ideas like Doug's or Konstantin's as long as someone is willing to work on them. IMO, trying to make the logging perfect is a waste of this community's resources.
What has been shown is that over 80% of the performance problem, if it exists (I believe it does, but no one has actually shown that here or in the sibling tickets), is corrected by removing String concatenation in favor of varargs and Object[] creation.
Therefore Object[] creation is at least 5x faster than string concatenation. Generally speaking, autoboxing and transient creation of an Object[] would be similar in cost, so it is probably closer to 10x faster.
Pushing on the SLF4J project (which is actively developed, unlike Log4j and commons-logging, etc) to add a few method signatures that reduce Object[] creation and autoboxing feels like a better idea to me than re-inventing SLF4J – which was created in the first place to solve this problem along with a few others.
No, I have not demonstrated that the cost of string concatenation in existing debug log messages is insignificant, but neither, to my knowledge, has anyone else shown that this cost is significant. I'm willing to accept optimizations that don't make the code any larger without proof of their significance, but optimizations that increase code size should first be shown to significantly improve performance, no?
> ... , given that even that the string concatenation has not shown to be significant, ...
Doug, have you done any benchmark to support the above statement?
> SLF4J [ ...] has the regular method void debug(String msg) so people will just use it as they did before.
Not if we have an automated test that looks for string concatenation in log statements. That's required for your proposal too, no?
> Migrating to SLF4J will mean changing logging for all levels in the entire project(s).
SLF4J wraps log4j, just like commons logging. So existing log configuration & format should be unaffected.
> creating extra object[] is a performance problem
No performance problems have actually been identified. Some have been suggested, but none shown to be significant in real operations.
That said, micro-benchmarks show that, were there a performance problem, getting rid of string concatenation would reduce the impact of logging by around 90%. Removing Object[] allocation would improve it further, but it's not clear, given that even the string concatenation has not been shown to be significant, that this would be a significant improvement. Plus, the majority of log statements have a single parameter and would not require an Object[] allocation. So, unless someone can show that Object[] allocation in multi-parameter log statements is slowing things significantly, slf4j would suffice.
I am reluctant to migrate to SLF4J at the moment for 3 main reasons.
- Yes, SLF4J does provide a way to log more efficiently, but it also has the regular method void debug(String msg) so people will just use it as they did before. In my approach I intentionally excluded it from the api.
- Migrating to SLF4J will mean changing logging for all levels in the entire project(s). This issue is targeting only debug messages.
Logging is a big deal. apache.commons worked for us for a long time; will SLF4J? I don't know. There are analytic tools out there relying on the format of the logs; will SLF4J break them? All this needs investigation. Just optimizing debug logging will not affect anything except improving performance.
- In fact, people in this discussion mentioned that creating extra object[] is a performance problem. I tried to address this issue in my proposal. And I think this makes a big difference compared to SLF4J, because 90% of the calls will end up executing the optimized versions of logDebug().
Scott> This is not much different than just using SLF4J [ ... ]
I too would prefer we simply switch to slf4j. In performance-critical code that requires complex log messages we might still occasionally use isDebugEnabled(), as we might make other local code optimizations in critical sections. To be clear, I find Konstantin's approach acceptable, but feel a move to slf4j would be simpler with essentially the same benefits.
String concatenation is the biggest potential performance problem. Adding an automated test that detects string concatenation in log statements would be a good addition. Before we can add that test though, we need a logging API that does not require string concatenation and does not bloat code. Slf4j and Konstantin's approach both provide one.
This is not much different than just using SLF4J, the only difference is that there would be some extra Object[] creation with SLF4J (which is really not a big performance problem, about the same as auto-boxing). But SLF4J would have a much faster and easier to use formatting implementation, and have performance benefits for INFO and other debug levels.
Additionally, we can still create a wrapper around that to add signatures that prevent object array creation and/or autoboxing for a limited subset of cases.
In short, why not just use SLF4J's more featured and flexible wrapper and either extend that or push for adding a few extra signatures on that side? It even has a tool to go through the source code and change stuff from log4j for you.
Konstantin, this sounds like it could be a workable approach.
This issue
Based on what is said in this jira and discussions with developers I'd like to make the following proposal.
There are 3 main goals the change should be targeting for:
- debug() statement optimization: do not calculate the arguments if they are not going to be logged.
- code readability: avoid if-debug-enabled statement before logging.
- new code verification: provide an automatic procedure or simple rules to verify that newly introduced code does not introduce the same inefficiencies.
My approach is a variant of Doug's proposal and SLF4J formatting.
- Let's create a new class org.apache.hadoop.log.Logger
We will have only static methods in the class for now:
class Logger {
  // Optimized logging
  static void logDebug(Log log, Object a1);
  static void logDebug(Log log, Object a1, Object a2);
  static void logDebug(Log log, Object a1, Object a2, Object a3);
  static void logDebug(Log log, Object a1, Object a2, Object a3, Object a4);
  // not efficient, do not use on performance critical paths
  static void logDebug(Log log, Object... args);
  static void logDebug(Log log, String formatter, Object... args);
  static void logDebug(Log log, Throwable e, Object... args);
}
- Each statement
Log.debug(a1 + a2 + ...)
in current code is replaced by one of the methods above:
Logger.logDebug(Log, a1, a2, ...)
Looking at our code, about 90% of debug calls will be replaced by the first 4 methods,
which do not compute the arguments and do not instantiate an array object.
Most of the remaining debug calls will be converted into a logDebug(Log log, Object... args) call,
which is less efficient, but still faster than building a string.
The remaining calls can be transformed into "formatter" version. We should make sure these are not
performance-critical parts of the code, or otherwise rewrite the messages to be able to call one
of the optimized versions of logDebug().
Autoboxing of primitive types should be ok with these less efficient versions of logDebug().
- The implementation of logDebug() should use optimized string building tools:
static void logDebug(Log log, Object a1, Object a2) {
  if(!log.isDebugEnabled())
    return;
  StringBuilder str = new StringBuilder().append(a1).append(a2);
  log.debug(str);
}
- The JavaDoc for logDebug() methods should state which methods are optimized, and
what to avoid when calling them, like primitive type autoboxing, string concatenation, etc.
This will give us all three goals above, up to a point. Efficiency and readability should be obvious.
The tool to verify should be checking that there are no "naked" calls of ".debug" anywhere but
in the Logger. And that calls of logDebug() do not contain "+" operations.
This is not the ideal solution, but it is practical imo.
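The verification rule sketched above (no "naked" .debug calls, no "+" inside logDebug arguments) could be enforced by a simple line-based scan. The class and method names below are hypothetical, and a real tool would parse Java properly rather than match substrings; this only illustrates the rule being checked:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical checker sketch: flags "naked" LOG.debug(..) calls and
// logDebug(..) calls that still concatenate strings with '+'.
public class DebugGuardChecker {
    public static List<String> check(List<String> lines) {
        List<String> violations = new ArrayList<>();
        boolean guarded = false;
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i).trim();
            if (line.contains("isDebugEnabled()")) {
                guarded = true;            // guard seen; the next debug call is fine
            } else if (line.contains(".debug(")) {
                if (!guarded) {
                    violations.add("line " + (i + 1) + ": unguarded .debug()");
                }
                guarded = false;           // each guard covers one call
            } else if (line.contains("logDebug(") && line.contains("+")) {
                violations.add("line " + (i + 1) + ": '+' inside logDebug()");
            }
        }
        return violations;
    }
}
```

Run as part of the build, a check like this would fail the build (or at least warn) on every violation, which is the "automatic procedure" the proposal asks for.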
Future directions
In the future we can turn org.apache.hadoop.log.Logger into a log wrapper by converting
the static methods into non static, and adding methods logInfo(), logError(), logWarn().
This should optimize logging in Hadoop in general. Non-optimal string building in info-level
logging hurts performance right now as I write this.
The idea is to replace org.apache.commons.logging.Log with org.apache.hadoop.log.Logger.
logging.Log will become a member of the Logger.
This is a bigger change and should be done in a different jira.
The unanswered question is whether this patch does anything useful for the project. Does it actually make Hadoop measurably faster? Shouldn't we do a cost/benefit analysis for optimizations that increase the code size?
I agree with Erik. I am actually surprised by how many comments this simple proposal has generated!
My +1 for committing the patch with a minor change - we do not need the isDebugEnabled() check for tests.
To put things in perspective:
- the practice of using isDebugEnabled() is already common in Hadoop code, this patch just makes it more consistent
- HDFS 187 in patch, 86 before patch
- Common 63 in patch, 52 before patch
Note: patch adds isDebugEnabled to tests as well so the counts that matter are even lower.
None of the proposed solutions solves all the problems and none of them is a short-term solution. The only complaint against the patch is that it looks inelegant, but from the discussion in this bug (and looking at others' efforts to do the same) there's really no elegant solution for this problem in Java.
Claiming that the patch adds code is a bit tricky. Yes, it adds to the total code written, but in production runs it means less code being executed and fewer objects created (in the alternative proposals, even if the objects are not stringified they are still created). Given that we already have problems with GC, that seems like a good idea.
We can still work on a better long-term solution; once we have one we would have to convert all the code to the new solution anyway (at least remove the already existing isDebugEnabled calls). The patch adds no significant (possibly no) additional work for that case.
> What does this show? ...
Your benchmark also shows that LOG.isDebugEnabled() provides the best results. A patch using this approach is already there. Do you really want to stop committing it?
There may be other comparable solutions out there but (1) we don't have a patch yet, (2) they may require further modifying the code and (3) the performance is not as good as LOG.isDebugEnabled().
I forgot to mention that my benchmark only measures running time; it does not account for the GC overhead induced by unnecessary object creation due to auto-boxing. This is another reason that LOG.isDebugEnabled() is better than the other suggested solutions.
> You've benchmarked a case that isn't in the existing patch (3 variables) and certainly isn't typical. ...
Which patch are you talking about? I found at least two three-variable cases in Erik's latest patch.
- LOG.debug("Exception while invoking " + method.getName()
-     + " of " + implementation.getClass() + ". Retrying."
-     + StringUtils.stringifyException(e));
+ if(LOG.isDebugEnabled()) {
+   LOG.debug("Exception while invoking " + method.getName()
+       + " of " + implementation.getClass() + ". Retrying."
+       + StringUtils.stringifyException(e));
+ }

- LOG.debug("for protocol authorization compare (" + clientPrincipal + "): "
-     + shortName + " with " + user.getShortUserName());
+ if(LOG.isDebugEnabled()) {
+   LOG.debug("for protocol authorization compare (" + clientPrincipal
+       + "): " + shortName + " with " + user.getShortUserName());
You've benchmarked a case that isn't in the existing patch (3 variables) and certainly isn't typical. Your 'static' case still boxes.
So your benchmark mostly shows that boxing costs, and string concatenation costs even more yet. I don't see how it shows that any of these costs are significant in the Hadoop codebase.
I've attached a version that benchmarks the more typical single-parameter using slf4j and a static version that avoids boxing.
The output this gives is:
java.version = 1.6.0_20
java.runtime.name = Java(TM) SE Runtime Environment
java.runtime.version = 1.6.0_20-b02
java.vm.version = 16.3-b01
java.vm.vendor = Sun Microsystems Inc.
java.vm.name = Java HotSpot(TM) Server VM
java.vm.specification.version = 1.0
java.specification.version = 1.6
os.arch = i386
os.name = Linux
os.version = 2.6.32-24-generic-pae
n=10000000
LOG.isDebugEnabled(): 13 ms
static debug(..) : 78 ms
LOG.debug(..) : 472 ms
What does this show? It shows that simply switching to slf4j using format strings would remove string concatenation costs speeding log statements by around 10x without any bloat. It does not show whether that improvement would be significant in any larger Hadoop benchmark, but at least the cost in code readability would be null. Further it shows that, if we find that logging costs are significant somewhere due to boxing, we could optimize that with static methods, gaining another 5x without losing any readability. Finally, if we find that logging costs are still somewhere significant, we can improve them yet another 6x with some impairment to readability. This is a bit like manual loop unrolling. We'll do it in certain performance-critical areas when it's shown to provide a significant advantage, but we shouldn't do it blindly for every loop.
FunAgain.java: Benchmarks comparing the following calls
- LOG.isDebugEnabled()
- static debug(..) as suggested by Doug
- LOG.debug(..), i.e. do nothing.
Result 1:
java.version = 1.6.0_10
java.runtime.name = Java(TM) SE Runtime Environment
java.runtime.version = 1.6.0_10-b33
java.vm.version = 11.0-b15
java.vm.vendor = Sun Microsystems Inc.
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
java.vm.specification.version = 1.0
java.specification.version = 1.6
os.arch = amd64
os.name = Linux
os.version = 2.6.9-55.ELsmp
n=10000000
LOG.isDebugEnabled(): 82 ms
static debug(..) : 502 ms
LOG.debug(..) : 11644 ms
Result 2:
java.version = 1.6.0_16
java.runtime.name = Java(TM) SE Runtime Environment
java.runtime.version = 1.6.0_16-b01
java.vm.version = 14.2-b01
java.vm.vendor = Sun Microsystems Inc.
java.vm.name = Java HotSpot(TM) Client VM
java.vm.specification.version = 1.0
java.specification.version = 1.6
os.arch = x86
os.name = Windows XP
os.version = 5.1
n=10000000
LOG.isDebugEnabled(): 172 ms
static debug(..) : 547 ms
LOG.debug(..) : 40421 ms
+1 on committing the existing patch. We may work on the static debug(..) in a separate JIRA.
Nicholas> I have checked the slf4j Logger API but did not find a method like that.
That's right. I said such an API "need not be a varargs call, but can be a normal, 4-arg method call". With slf4j it would be varargs. Also note, however, that the current patch doesn't contain any log examples with three parameters.
Scott> The autoboxing and varargs is significantly less expensive than string concatenation.
Scott> Sun's JVM will already avoid the varargs Object[] construction, and the Double, but not the Long or Integer, if +UseEscapeAnalysis is on. That flag becomes the default soon.
Good points, Scott.
This issue lacks benchmarks. Proposed optimizations should include benchmarks. The following article has comments that indicate that, as Scott suggests, autoboxing and varargs are pretty fast. With escape analysis they might be even faster.
If someone adds debug log lines in performance sensitive code, e.g., when calculating CRC32 or somesuch, then explicitly calling isDebugEnabled() would probably be a significant optimization, but such cases are rare, and performance sensitive code should be modified cautiously anyway, since other minor changes can have big effects.
Doug, sorry that I misspelled your name.
>The way you'd write this is:
>
>LOG.debug("a={} b={} c={}", a, b, c);
Dong, I have checked the slf4j Logger API but did not find a method like that. There is one requiring creating an array, debug(String format, Object[] argArray). From the FAQ link you provided earlier, it also suggests creating an array in this case. Below is quoted from the FAQ.
If three or more arguments need to be passed, you can make use of the Object[] variant. For example, you can write:
logger.debug("Value {} was inserted between {} and {}.",
    new Object[] {newVal, below, above});
> LOG.debug("a=" + a + ", b=" + b + ", c=" + c);
The way you'd write this is:
LOG.debug("a={} b={} c={}", a, b, c);
This need not be a varargs call, but can be a normal, 4-arg method call.
That said, numeric values would get boxed. In the attached patch there are some calls that pass a single numeric value that would get boxed. If we're concerned about the performance cost of the boxing in these cases, then we could define methods like:
public static void debug(String format, int i){ if (log.isDebugEnabled()) log.debug(format, i); }
public static void debug(String format, long l){ if (log.isDebugEnabled()) log.debug(format, l); }
public static void debug(String format, float f){ if (log.isDebugEnabled()) log.debug(format, f); }
public static void debug(String format, double d){ if (log.isDebugEnabled()) log.debug(format, d); }
If benchmarks show this to be a significant optimization, this might argue for creating a Hadoop-specific logging wrapper, as slf4j does not make this optimization.
Here's a random idea: use assertions for logging! So folks could write:
assert LOG.debug(....);
where LOG.debug() would need to return 'true'.
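A minimal sketch of that idea (the AssertLog class below is hypothetical, not an existing API): debug() returns true so the assert never fails, and when assertions are disabled - the JVM default - the whole statement, including argument evaluation, is skipped at no cost.

```java
// Hypothetical sketch of assertion-based logging; not a real logging API.
public class AssertLog {
    private final StringBuilder out = new StringBuilder(); // stand-in for a real appender

    // Returns true so that `assert log.debug(...)` never throws AssertionError.
    public boolean debug(String message) {
        out.append(message).append('\n');
        return true;
    }

    public String output() {
        return out.toString();
    }
}
```

Callers would write `assert log.debug("state=" + expensiveValue);` and run with `java -ea` to enable logging; without the flag the JVM elides the call and its argument construction entirely. The obvious drawback is that logging could then no longer be toggled per-logger at runtime, only JVM-wide via assertion flags.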
Thanks Luke for clarifying aspectj.
If aspectj can't do this, then it's likely a no-go. Spring or Guice cannot do it either (they both change methods by overriding them, not by wrapping them).
That leaves ASM or low-level bytecode changes, which definitely would work but would not be easy at all and would require significant testing. At this point, a feature like that should be part of the logging framework.
If I understand the slf4j API correctly, slf4j does not really help for the code above. It has to create Integer, Long and Double objects for the primitive data types and an array for varargs.
It helps a lot, even with the autoboxing and varargs. A String has a hashCode and byte array member variable, a minimum memory overhead much larger than a boxed object.
The autoboxing and varargs is significantly less expensive than string concatenation.
int a = 0;
long b = 1L;
double c = 0.0d;
LOG.debug("a={}, b={}, c={}", a, b, c);
In slf4j this requires allocating a varargs array with 3 elements (~32 bytes), and one Integer, one Long, and one Double (16, 24, and 24 bytes).
With log4j, it does not require the vararg array, but creates a string out of each number – which uses CPU and allocates more memory than just the boxed object, and then concatenates that into another String. The end result is at minimum 15 characters (30 bytes) + String overhead (~48 bytes), and the intermediate result overhead is larger. If any of the numbers are not so small, the overhead grows quickly.
The JVM is getting smarter with its Escape Analysis optimizations too – which will either eliminate many autobox and vararg object allocations or push them on the stack. I think Sun's JVM will already avoid the varargs Object[] construction, and the Double, but not the Long or Integer, if +UseEscapeAnalysis is on. That flag becomes the default soon.
Thanks Luke for clarifying aspectj.
Let's be academic again: one more fantasy idea is to write our own java compiler or JVM. It definitely will work.
For aspectj, first of all, we currently don't use aspectj in Hadoop except for testing. I bet there are people against using aspectj or other bytecode-rewriting techniques in non-test code since it is hard to debug.
Let's be academic (as what I see in this issue) and ignore the statement above. I cannot see a simple solution using aspectj. Clearly, if we define a pointcut per LOG.debug(..), aspectj will work, but this is too much work and, in some cases, we would need to change the code in order to define a pointcut. For the people suggesting aspectj and saying that it is simple, could you please provide a simple example?
I heard one suggestion from Luke yesterday. He claimed we could use the "around" advice. However, it seems to me that around does not prevent parameter evaluation. I have not tested it yet.
Did some digging on the aspectj wrapping approach. Looks like it's a no-go performance-wise, as it cannot solve the argument-building cost issue without fixing aspectj itself; someone else tried to do exactly the same thing here:
It looks like the most reasonable current course of action is just committing the patch, as it is correct, low-risk, and gives significantly lower GC stress (especially in namenode code, according to Suresh in offline discussions).
We should file a separate jira to explore switching to the slf4j api (still using log4j as the backend). One open issue even with the slf4j API is that it doesn't solve the autoboxing cost for primitive types, which we use a lot in the logs (old but still reflecting the current API design):
int a = ...;
long b = ...;
double c = ...;
LOG.debug("a=" + a + ", b=" + b + ", c=" + c);
If I understand the slf4j API correctly, slf4j does not really help for the code above. It has to create Integer, Long and Double objects for the primitive data types and an array for varargs.
You can do this for info messages too for more performance when that is off. There are no extra lines of code or maintenance; it is the simplest solution. As long as the change is applied at build time, I don't see any real drawback.
> 1. Code must remain as readable/simple as possible and maintainable (no wrapping isDebugEnabled() in all of source code).
Does anyone believe that? I think the guards should ideally be limited to performance sensitive parts of the code. But if folks don't trust that can be maintained, then a warning when non-constant strings are logged might be acceptable. Constant strings should be as fast as the guard.
> 2. Code must perform best (no varargs, autoboxing, string concatenation, etc for unused debug lines)
We could easily implement a guard that both performs well and does not bloat code, e.g.:
public static void debugLog(String message) {
  LOG.debug(message);
}
public static void debugLog(String message, Object o1) {
  if (LOG.isDebugEnabled())
    LOG.debug(MessageFormat.format(message, new Object[] {o1}));
}
public static void debugLog(String message, Object o1, Object o2) {
  if (LOG.isDebugEnabled())
    LOG.debug(MessageFormat.format(message, new Object[] {o1, o2}));
}
public static void debugLog(String message, Object o1, Object o2, Object o3) {
  if (LOG.isDebugEnabled())
    LOG.debug(MessageFormat.format(message, new Object[] {o1, o2, o3}));
}
...
We can save ~296 lines from the patch if we exclude the test code:

[...hadoop]$ find */src/java -name \*.java | xargs grep '\.debug(' | wc -l
410
[...hadoop]$ find */src/test -name \*.java | xargs grep '\.debug(' | wc -l
148
> If we run it as a target of ant build, we can enforce people to maintain the guards automatically.
I'd like to avoid any potential false positives the script may identify. Checking for appropriate if blocks around costly debug logging should be a responsibility of the patch reviewer.
It must also be ensured that isDebugEnabled() is NOT wrapped around an info, warn, error, or fatal. Someone might check in a change that turns a debug into an info without removing the guard.
Luke> The problem with the current approach is that you'd have to keep patching people's new code manually.
I understand that Erik has a script, which automates the process of finding calls to debug() unguarded with isDebugEnabled(). If we run it as a target of ant build, we can enforce people to maintain the guards automatically.
The current patch seems to be a bit of a sledge hammer where a ball-peen might be more appropriate. However, since Java doesn't give us any way of lazily evaluating the arguments to a method, this check does seem to be a standard way of working around this limitation. I'm a bit surprised at the debug logs in the test cases. Doesn't it seem more reasonable to do all logging in tests at an always-on level?
Thanks Scott for a nice summary.
Actually, #1 is subjective. I don't think adding an if-statement per LOG.debug(..) makes the code hard to read or maintain. On the other hand, #2 is very important in production systems. Source-code/bytecode rewriting, mentioned in #3, needs a lot of work. Why don't we just commit Erik's patch?
I have additional concerns about AOP. Herriot plans to introduce AOP-based interfaces for exposing system internals. These should not be deployed on production clusters. Using AOP to check isDebugEnabled() results in two flavors of AOP code: one that must be deployed on the system and another that should not be. Confusion here could result in serious issues.
Second, with AOP, if someone deploys the version without the AOP-based debug check, there is no way to tell from system behavior alone whether the check is happening.
My vote is to keep it simple for now and check in the code changes that Erik has. This is a widely used Java coding practice and not a huge code bloat. If others still feel that this causes code bloat, we could later choose one of the options proposed in the jira. At that time, we would anyway need to remove all the debug-enabled checks that are currently being made.
At most two of these can be true:
1. Code must remain as readable/simple as possible and maintainable (no wrapping isDebugEnabled() in all of source code).
2. Code must perform best (no varargs, autoboxing, string concatenation, etc for unused debug lines)
3. The build system can't re-write code (AOP, source modification, or bytecode modification)
There appear to be "-1"s that require all of the above to be true. Therefore, this ticket will go nowhere until there is consensus to let one of the three be false. Implicitly, #2 is false if no action is taken.
As I indicated in my previous patch, Java varargs is not a good idea: it creates an object array to pass as varargs.
-1 to ASM-style work as it just complicates build/test cycles
-0 to SLF4J. I don't see the point. I use commons-logging as the front end with either Log4J or other tools as the back end.
Now, we could improve commons-logging to add varargs...
> Leaving out test code doesn't change the situation.
Why? It would make the patches smaller and quicker to review.
> build time - means we wouldn't be able to enable debug at runtime, that's a significant restriction compared to current state and limits troubleshooting capabilities.
AOP can insert if LOG.isDebugEnabled()) guards at build time, so you can definitely enable debug at runtime.
The problem with the current approach is that you'd have to keep patching people's new code manually.
Scott: build time - means we wouldn't be able to enable debug at runtime, that's a significant restriction compared to current state and limits troubleshooting capabilities.
That's not what I meant. I mean that at build time, the source code:
log.debug("hello " + "world" + "!");
can be transformed into:
if (log.isDebugEnabled()) {
  log.debug("hello " + "world" + "!");
}
The transformation can be at the source code level prior to compilation OR at byte code level post compilation.
Obviously, that is a longer term option.
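That source-level transformation could be illustrated with a toy single-line rewriter (hypothetical; a real tool would operate on the AST or bytecode, since a regex cannot handle multi-line or nested statements):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive sketch of the build-time source rewrite described above.
// Only handles a complete `log.debug(...);` call on a single line.
public class DebugGuardRewriter {
    private static final Pattern DEBUG_CALL =
        Pattern.compile("^(\\s*)(\\w+)\\.debug\\((.*)\\);\\s*$");

    public static String rewriteLine(String line) {
        Matcher m = DEBUG_CALL.matcher(line);
        if (!m.matches()) {
            return line; // not a single-line debug call; leave untouched
        }
        String indent = m.group(1), log = m.group(2), arg = m.group(3);
        return indent + "if (" + log + ".isDebugEnabled()) {\n"
             + indent + "  " + log + ".debug(" + arg + ");\n"
             + indent + "}";
    }
}
```

Running this over the source tree before compilation would insert the guard mechanically, so developers never have to write (or maintain) it by hand; the same effect could be achieved post-compilation with bytecode weaving.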
Do we have any information about how much the patch here improves performance?
> code bloat is mostly relevant for people reading the code
Bloat hinders maintainability, a perennial goal.
Why not commit a simple patch now and work on a good solution later?
Luke: we all prefer fixing the tools. Consider the timeframe and effort required. In the meantime we can have the proposed patch as an intermediate solution. Leaving out test code doesn't change the situation. Adding comments is also code bloat (code bloat is mostly relevant for people reading the code), plus it's not guaranteed to work (isDebugEnabled() is guaranteed to work, but people will add a marker more or less randomly without considering the price of a debug call).
Scott: build time - means we wouldn't be able to enable debug at runtime, that's a significant restriction compared to current state and limits troubleshooting capabilities.
I appreciate the comments, but how about separating a short-term quick fix from a long-term proper solution? At this point it's not clear what the proper solution is. If you want, I'll file a jira for the long-term solution so that we don't forget about it. I mean, we probably spent more time discussing this than it would take to both create a patch and revert it (once we have the proper solution).
One could use something like ASM to re-write log.debug() statements with the guard around it at class load time, or some build system trickery can do something similar at build time. An AOP tool could do this sort of thing.
I wonder if any logging tool project has already done such a thing?
I'd prefer fixing the tools instead of bloating the code. Some simple heuristics that could work:
- Skip checking all the test code, which is trivial, as they live in different trees.
- Put some marker, say a trailing /// for verified debug calls that are not expensive, so a grep -v '///$' can filter them out etc.
Me too; however, then there's no way to automatically or at least semi-automatically check for the cases where the optimisation would be helpful.
If all debug() logging calls have an isDebugEnabled() guard, it's easy to check the whole codebase in a few minutes (I have a simple script that only produces a few false positives that need to be eyeballed). If the guard is used only in the minority of cases where it's actually useful, then verifying whether it's used everywhere it's needed is very expensive.
Possibly better solutions but I don't think they are realistic in short/medium term:
- use scala and lazy values (arguments for debug() are not constructed unless used)
- use extensive profiling and see where debug() calls matter (this could be part of automatic builds/tests)
This bloats the code when the logged string is a constant. Also, in the vast majority of cases, this is not a significant optimization. I'd rather keep most code more readable and only perform the check in performance-critical code.
+1 the patch looks good.
I'll commit both debug-logging related patches if there are no other opinions.
Patch
HADOOP-6884-0.22-1.patch adds more changes that were previously missed (or are new .debug() calls).
Alejandro, with the methods you are proposing, an array of Objects is created to pass the varargs. It is still expensive.
The debug log check is critical for performance-sensitive paths. Due to the lack of macros in Java, log wrappers don't really help if additional objects need to be created for the messages (the template stuff helps, but not in many cases).
-1 for yet another home-brew log wrapper.
+1 for slf4j.
While common practice, this results in a lot of code uglification/noise.
I'd rather suggest the following (we are doing this in Oozie):
Create a Log wrapper, XLog which extends Log and provides the following additional methods:
info(String msgTemplate, Object ... args)
warn(String msgTemplate, Object ... args)
debug(String msgTemplate, Object ... args)
trace(String msgTemplate, Object ... args)
In each one of these methods, if the log level is enabled, use the JDK MessageFormat class to create the log message and call the corresponding Log method; else do nothing.
Caveat: the last arg has to be tested for being a Throwable and, if so, the corresponding Log method with the (String, Throwable) signature must be used.
Integrated in Hadoop-Common-trunk #437 (See)
HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..). Contributed by Erik Steffl
Configuration Blues
Have you ever noticed how some applications seem to configure themselves? I
don't mean that they auto-detect their settings; rather, the configuration
process and tools are so well designed that they are a pleasure to use.
Like most things in development, this level of functionality didn't appear
by accident. "Application configuration deserves careful design -- perhaps
even more than application code." (Halloway, 02) If we want to offer a
similar experience to all our users, we need to stop treating configuration
as an afterthought.
There are a number of options available to Java developers when it comes to
implementing configurations. This article begins with the classic
Properties class and continues on to the new
Preferences model. It ends with
an overview of Java Management Extensions (JMX). Along the way, I discuss
the various strengths and weaknesses of each option, and attempt to place
them in the broader context of a "configuration language." In other words,
if there was an ideal configuration model, what attributes and processes
would the model have, and how well does each of the existing options address
this ideal?
Although this article does not directly discuss auto-detection,
I strongly recommend this valuable technique when it's possible. There
is no benefit in requiring users to supply values that can be
programmatically determined. It's a nuisance to the user, and presents an
opportunity for error. On the other hand, there are limits to
auto-detection, especially in a platform-independent environment like Java.
Without belaboring the point, no auto-detected value should ever be treated
as more than a default. There needs to be an override available. The
easiest way to enable this is to provide a configurable property.
Auto-detection doesn't do away with properties, it underscores the need for
them.
Property Lines
Typically, we approach the design of our application's configuration in an
ad hoc manner. In other words, we add properties as they occur to us during
development. Perhaps the primary reason for this is because we can. Most
Java applications handle configuration using the
Properties class. This
class allows us to add properties at will. For instance, the following code
shows how we can easily create, set, and save a couple of properties with a
minimum of fuss.
try {
// Create and set the properties
Properties properties = new Properties();
properties.setProperty("propertyOne", "valueOne");
properties.setProperty("propertyTwo", "valueTwo");
// Save the properties
properties.store(new FileOutputStream("app.properties"),"header");
}
catch (Exception e) {
// Handle exception as needed
}
Retrieving the values of the properties is no harder.
try {
// Load the properties
Properties properties = new Properties();
properties.load(new FileInputStream("app.properties"));
// Retrieve individual properties
String prop1 = properties.getProperty("propertyOne","defaultOne");
String prop2 = properties.getProperty("propertyTwo", "defaultTwo");
}
catch (Exception e) {
// Handle exception as needed
}
Given the ease with which the
Properties class allows us to create simple
configurations for our applications, there is little incentive to do more.
Unless you are building an application that will be widely deployed, there
doesn't seem to be much concern given to the task. In addition, developers
are usually judged by other criteria. The unconscious use of the
Properties
class seems almost inevitable.
For all its inevitability, though, this behavior does have its problems.
Since configurations are so easy to implement using the
Properties class,
and there is no immediate penalty associated with its use, cleaning up an
application's configuration can continue to be treated as an end-game
activity; the mess will be picked up later. Unfortunately, this laissez faire practice can lead to the duplication of properties and coding effort. Furthermore, individual properties and their syntax often go
undocumented in the heat of development. This can easily cause later
conflict and confusion regarding them. Finally, there is usually little or
no validation applied to most of these ad hoc properties. In my own
experience, I can't count the number of times I've been brought in to
troubleshoot an application where the problem was due to an incorrect or
missing property value that was not properly checked.
A secondary concern with all configurations, especially those built using
the
Properties class, is the ease in which we create global data. One of
the tenets of object-oriented programming is the encapsulation of data.
There are no global variables in OO. Yet, encapsulating all the data is
difficult, and many of us, myself included, will resort to tricks. One such
trick is to create singleton classes that are actually little more than
clusters of global variables. Much the same may be said about our use of
the
Properties class. Many applications are riddled with small chunks of
code like the samples shown above. We get properties. We set properties.
We don't normally think of this as a way of skirting around the data-encapsulation rule, but its net effect is just that. We silently introduce
global variables, and then don't see them as such.
The costs associated with these problems vary. Certainly, direct support is
the most obvious. Subtler costs may arise with awkward deployments and lack
of central management. Poorly implemented configurations may be difficult
or impossible to administer remotely. End users may be prevented from
self-servicing their applications. At its worst, code changes may be
required for something that could easily have been accomplished by a
property, had configuration been duly considered. The more ad hoc the
process, the less likely it is to minimize these costs.
Preference for Preferences
In some regards, the
Preferences class is a vast improvement over
Properties. Like its predecessor,
Preferences is lightweight and very
straightforward to use. It differs from
Properties in at least two ways.
First, it introduces some notion of scope. It distinguishes between system
and user preferences, and ties each preference to an individual class.
Second,
Preferences allows you to remain ignorant of where and how your
configuration is persisted. The
Properties class requires you to supply an
OutputStream to its store method. You can't help but be aware of where and
how you're persisting the configuration. In contrast, the
Preferences class
doesn't have an explicit store method, much less a way to direct its
persistence. A pluggable adapter handles the messy details for the class.
The code below shows how easy it is to retrieve preferences from a
configuration. In this example, it gets the location and size of a window
for the current user. Note that there is nothing in the code regarding how
to retrieve these values. It specifies scope when it requests the
Preferences object, and leaves the rest to the plugged adapter.
// Get the Preferences object. Note, the backing store is unspecified
Preferences preferences = Preferences.userNodeForPackage(getClass());
// Retrieve the location of the window, default given in 2nd parameter
int windowX = preferences.getInt("WINDOW_X", 50);
int windowY = preferences.getInt("WINDOW_Y", 50);
// Retrieve the size of the window, default given in 2nd parameter
int windowWidth = preferences.getInt("WINDOW_WIDTH", 300);
int windowHeight = preferences.getInt("WINDOW_HEIGHT", 100);
Like
Properties, persisting under
Preferences is also very easy. The
following example saves a window's size and location for the current user.
It assumes that the variables
windowX,
windowY,
windowWidth, and
windowHeight have already been set elsewhere.
// Get the Preferences object. Note, the backing store is unspecified
Preferences preferences = Preferences.userNodeForPackage(getClass());
// Save the window location and size
preferences.putInt("WINDOW_X", windowX);
preferences.putInt("WINDOW_Y", windowY);
preferences.putInt("WINDOW_WIDTH", windowWidth);
preferences.putInt("WINDOW_HEIGHT", windowHeight);
On the whole, the
Preferences class is no more difficult to use than
Properties. In fact, since the persistence mechanism is transparent, it
could be argued that
Preferences is even easier to use. On the other hand,
this isn't necessarily what's needed. The problem hasn't been that the
technology is too difficult to use, but rather that it might be too easy.
Both
Preferences and
Properties allow and encourage us to carry on in an ad
hoc manner. There is nothing in either to compel us to change our ways.
Towards a Configuration Language
Now, we could all promise to do better in the future, and to treat
configurations with the consideration they deserve. But this doesn't
really address the issue. Good intentions aside, the real problem is we
have nothing with which to replace our bad habits. Stuart Dabbs Halloway
has spent a fair amount of time exploring this topic in a series of articles
entitled "Java Properties Purgatory." Not content to spend his days in
limbo, Halloway proposes a way out.
He begins with a brief overview of properties, and how they are used and
misused by Java developers. From there, he builds a case for a
"configuration language" with the following four elements:
- Structure: A producer/consumer agreement on how to pass configuration information back and forth. An optional type system allows some validation of the information.
- Lookup: A way to query configuration information when needed.
- Scope: Configuration information binds to specific code and data.
- Metadata: Metadata associated with the configuration information should be exposed in a standard format. This can be exploited by tool builders and automated configuration tasks.
In other words, any well defined, generally applicable configuration model
will incorporate all four elements.
He proceeds to examine JNDI, RMI, and Java Security configurations in light
of these four elements. He finds all three Java technologies lacking. In
terms of structure, each has its own way of passing information
back and forth. For instance, multi-valued properties are delimited
differently under each. Furthermore, they all have different lookup rules
and limited success in addressing scope. In fact, the only agreement among
the three is their total lack of support for metadata.
Halloway then looks to XML as an answer. He believes the basic elements
of the configuration language can be found in the J2EE Web Application
configuration XML. Not only does it have a well defined structure, but it
possesses unambiguous lookup rules, and is capable of specifying scope. The
biggest flaw Halloway finds with the model is that it is not generic. It
works well for its purpose, but is too domain-specific to meet the needs of
a general configuration language.
The series ends with Halloway's first cut of a design for what he calls a
configuration interface. He presents five objectives that he feels will
move us towards a better configuration language.
- Generalize the Preferences API's notion of scope to hook in arbitrary providers.
- Make XML a first-class citizen in the Preferences API.
- Explicitly permit some backing stores to be read-only.
- Add metadata support to the Preferences API.
- Provide an auditing mechanism to track where configuration information comes from.
The generalization of the Preferences API to support arbitrary providers is
an important feature. Not too long ago, I was peripherally involved in a
project where the developers needed to unify several legacy configurations.
These configurations were persisted in a hodge-podge of backing stores
including properties files, XML documents, and even database tables. The
developers successfully completed the project by migrating all existing
backing stores into one, and creating a single administration console.
However, this only worked because the developers owned the legacy
configurations, and were able to migrate them into one shared source.
Arbitrary providers for the Preferences API would have allowed the developers to
leave third-party backing stores in place, and write wrappers to plug them
into a single administration console.
As for making XML a first-class citizen, the
Preferences class currently
supports the import and export of configuration information as XML.
Halloway would like to extend this by granting direct access to the
underlying data as an XML document. In other words,
Preferences should
allow the developer to get and set an XML document.
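The export facility that exists today can be exercised with a short round trip like the one below; the node path and key are invented for the illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.prefs.Preferences;

public class PrefsXmlDemo {
    public static void main(String[] args) throws Exception {
        // Node path and key are illustrative.
        Preferences prefs = Preferences.userRoot().node("demo/prefsxml");
        prefs.putInt("WINDOW_WIDTH", 300);

        // Export this node and its children as an XML document.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        prefs.exportSubtree(out);
        System.out.println(out.toString("UTF-8"));

        // The same document can be read back with importPreferences.
        Preferences.importPreferences(new ByteArrayInputStream(out.toByteArray()));
    }
}
```

Halloway's proposal goes further than this: rather than serializing to a stream, the API would hand the developer the underlying data directly as an XML document.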
Metadata support is perhaps the most interesting feature in Halloway's
design. Configurations need to be able to enumerate their properties and
operations. Tools, such as an administration console, could query a
configuration, and display its properties and make use of its operations,
much like an IDE uses reflection to probe a class to build lists of methods
and parameters.
Third Time's a Charm
Much of what Halloway calls for already exists in Java Management Extensions
(JMX). JMX is the third alternative to building configurations in Java. It
is also the third attempt by Sun to create a more robust and scalable
configuration management system than what is found in either the
Properties
or
Preferences classes. The two earlier attempts were Java Management API
(JMAPI) and Java Dynamic Management Kit (JDMK).
The essence of JMX is the
MBean class. An
MBean represents a managed
resource. It advertises the configuration attributes and operations of the
resource, and makes them available to another system. The
MBean may be
static or dynamic, meaning either all of the attributes and operations are known
beforehand, or they may be assembled and modified at runtime.
MBeans are
registered as agents with another system, which is typically some type of
centralized administration console. An
MBean may reside on the same system
alongside the console, or it may live on a remote node. One of the
beauties of JMX is that transport is completely hidden. Neither the console
nor the
MBean is aware of any intermediate protocol. This makes JMX ideal
for distributed environments, which in turn leads to the perception that it
is an enterprise-level technology, although there is nothing to prevent the
use of
MBeans locally.
Creating a standard
MBean involves little more than following a few naming
conventions. A standard
MBean for a given resource is defined by a Java
interface named
MyResourceMBean and a Java class,
MyResource,
that implements the
MyResourceMBean interface. For instance, the
MBean interface and implementation for the sample
MyService resource appears
below.
public interface MyServiceMBean {
void start();
void stop();
int getConnectionPoolSize();
void setConnectionPoolSize(int size);
}
public class MyService implements MyServiceMBean {
private int connectionPoolSize;
public void start() {
// Starts the service
}
public void stop() {
// Stops the service
}
public int getConnectionPoolSize() {
return connectionPoolSize;
}
public void setConnectionPoolSize(int size) {
// adjust actual connection pool, then hold onto new size
this.connectionPoolSize = size;
}
}
Making use of the
MBean involves little more than registering it with an
MBeanServer, as shown in the code fragment below.
.
.
MBeanServer mbs = MBeanServerFactory.createMBeanServer();
MyService myService = new MyService();
ObjectName myServiceON = new ObjectName("domain:id=MyService");
mbs.registerMBean(myService, myServiceON);
.
.
The key to appreciating JMX lies within this registration process. As an
MBean is being registered, the
MBeanServer builds an
MBeanInfo object.
MBeanInfo contains the metadata that describes the configuration interface
of the managed resource. In other words,
MBeanInfo holds a list of all
of the
MBean's attributes and operations. In the case of a standard static
MBean,
MBeanInfo is built using introspection during registration. The
MBeanInfo object for the sample
MyService would contain two operations,
start and
stop, and one attribute,
connectionPoolSize.
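The registration and introspection just described can be seen end to end in a sketch like the following. The domain and class names are illustrative, and the resource is re-declared (as DemoService) so the snippet compiles on its own:

```java
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

// The managed resource, re-declared here so the example is self-contained.
interface DemoServiceMBean {
    int getConnectionPoolSize();
    void setConnectionPoolSize(int size);
    void start();
}

class DemoService implements DemoServiceMBean {
    private int connectionPoolSize = 10;
    public int getConnectionPoolSize() { return connectionPoolSize; }
    public void setConnectionPoolSize(int size) { connectionPoolSize = size; }
    public void start() { /* starts the service */ }
}

public class MBeanInfoDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = MBeanServerFactory.createMBeanServer();
        ObjectName name = new ObjectName("demo:id=DemoService");
        mbs.registerMBean(new DemoService(), name);

        // This MBeanInfo was built by introspecting the standard MBean above:
        // the getter/setter pair becomes one attribute, start() an operation.
        MBeanInfo info = mbs.getMBeanInfo(name);
        for (MBeanAttributeInfo attr : info.getAttributes()) {
            System.out.println("attribute: " + attr.getName());
        }
        for (MBeanOperationInfo op : info.getOperations()) {
            System.out.println("operation: " + op.getName());
        }
    }
}
```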
An alternative to introspection is to have the server directly ask the
MBean
for
MBeanInfo. This is the essence of a dynamic
MBean. The
MBean
itself remains in control of what it reveals to the server. Not only does
the
MBean remain in charge, it is also able to supply much more information
to the server than the server would be able to glean through simple
introspection. In particular, the
MBeanInfo structure allows for a much
richer description of the
MBean's attributes, operations, and notifications
than would be possible if the server were limited only to introspection.
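A minimal dynamic MBean might look like the sketch below. The attribute and its human-readable description are invented for illustration; the point is that the bean hands the server its own MBeanInfo, including richer metadata than introspection could recover:

```java
import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.DynamicMBean;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;

// Illustrative dynamic MBean: it builds its own metadata on request.
public class DynamicService implements DynamicMBean {
    private int connectionPoolSize = 10;

    public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo attr = new MBeanAttributeInfo(
                "ConnectionPoolSize", "int",
                "Number of pooled connections",   // description introspection can't supply
                true, true, false);
        return new MBeanInfo(getClass().getName(), "Demo dynamic MBean",
                new MBeanAttributeInfo[] { attr }, null, null, null);
    }

    public Object getAttribute(String name) {
        if ("ConnectionPoolSize".equals(name)) {
            return connectionPoolSize;
        }
        throw new IllegalArgumentException(name);
    }

    public void setAttribute(Attribute attribute) {
        if ("ConnectionPoolSize".equals(attribute.getName())) {
            connectionPoolSize = (Integer) attribute.getValue();
        }
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) {
            list.add(new Attribute(n, getAttribute(n)));
        }
        return list;
    }

    public AttributeList setAttributes(AttributeList attributes) {
        for (Object a : attributes) {
            setAttribute((Attribute) a);
        }
        return attributes;
    }

    public Object invoke(String actionName, Object[] params, String[] signature) {
        throw new UnsupportedOperationException(actionName);
    }
}
```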
It is through the
MBeanInfo object that JMX primarily addresses the four
elements of Halloway's configuration language.
MBeanInfo brings a uniform
structure, lookup method, and set of metadata to all configurable resources. Scope remains a problem: properties are by their very nature global, and there is very little that can be done to eliminate this. Fortunately, scope is arguably the least problematic element in Halloway's configuration language, especially when the properties are managed by a robust, fully developed system such as JMX, where the developer must go well out of the way to circumvent the existing mechanisms.
It is the uniformity of structure, lookup, and metadata of JMX that yields
the most promise. It is through these features that tools may be built, and
the implementation of an application's configuration can become a true part
of the design and development process, rather than an afterthought.
Afterthoughts
This article began as a simple warning regarding the use of properties as global variables. However, as I
investigated configurations, and played around with other ideas, a different
theme began to emerge. Basically, configurations do matter. In many cases,
they are the first impression a user has of a given application. How
smoothly configurations go has a lot to do with how well the application is received.
Furthermore, the methods we use to develop our configurations have
non-obvious costs associated with them. Some methods encourage a laissez faire approach. Others require us to be a bit more deliberate. The more ad hoc the process is, the higher the long-term costs associated with
it. Finally, some methods can be better leveraged by development tools than
can others. Once the tools are in place to exploit this, the design and
implementation of an application's configuration can become a true part of
the development cycle.
Configuration appears to be a largely unexplored area of application
development. I look forward to delving into it a bit deeper, and hope to
publish at least one more article presenting a simple development tool that
begins to exploit the metadata features of JMX.
References:
"Preferences API"
An overview of the design, usage guide, and link to the Javadoc.
"Using the Preferences API", Daniel Steinberg, 2003.
A short, tech tip introduction to Preferences.
"The Preferences API in Java 2SE 1.4 (Merlin)", Jeff Brown, 2001.
An exploration of Preferences, including setting backing stores.
Java and JMX: Building Manageable Systems, Heather Kreger, Ward Harold, and Leigh Williamson, Pearson Education, Inc., 2003.
"Java Properties Purgatory" Part 1 and Part 2, Stuart Dabbs Halloway, 2002.
Details
- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.3
- Fix Version/s: None
- Labels: None
- Environment:
Operating System: All
Platform: All
Description
As an enhancement to the current Digester API, I propose to add new variants to
the Rule methods begin(), body() and end(). Those new variants would accept the
namespace URI and local/qualified name of the current element.
As Rule is not an interface, but an abstract class, this change can be done
without hurting backwards compatibility. It will loosen the coupling between
Rule implementations and Digester, as the rule would get sufficient information
about the current element, without needing to call
Digester.getCurrentElementName(). In addition, it provides access to the current
namespace URI, which is not possible with the current version of Digester.
I'll be attaching a patch that implements these changes. This involves changes
to Rule.java, Digester.java and BeanPropertySetterRule.java. The latter change
is not totally necessary, but rather demonstrates the benefits of the proposed
API. All other Rule implementations could be changed later to remove the
many deprecation warnings.
[I've posted this patch to the commons-dev list before, I'm just adding it here
so it doesn't get lost]
Activity
I don't think this is the best way to get info about the parsed element. With
this patch a rule will retrieve its state from two different sources:
- the digester's parameter stack
- the arguments of begin()/body()/end() with the tag name and the URI
I believe it would be cleaner to widen the concept of parameter stack to a rule
state stack. A rule state object could contain :
- an org.w3c.dom.Element object providing the element name, its URI and
attributes (we don't need a complete implementation of the interface)
- a Map containing properties (the parameter array and the parent rule could fit
here)
First of all, thanks for looking at this
However, I don't agree with your analysis.
The parameter stack is (IMHO) a kind of hackish way of enabling collaboration
between CallMethodRule and CallParamRule. It has nothing to do with the state of
a rule in general (the state of a rule should always be stored in its own
instance variables). A better way to enable rule collaboration is out of the
scope of this bug, however.
Second, when we talk about rule state, we need to differentiate between state
regarding the input (i.e. the context in the XML document being parsed) and the
output (i.e. the object and parameter stacks). Originally, the rules got
information about the input state through the method args, like in
begin(Attributes) and body(bodyText). Later in the development of Digester the
need has emerged to also access the current element's name, specifically in the
BeanPropertySetterRule. At that time, changing the Rule class was avoided, and
the immediate need was fulfilled by directly accessing the "match" instance
variable of Digester and extracting the last segment. Digester does provide the
method getCurrentElementName() that does the same thing, but this method is also
a result of Rule not having the interface I propose here. Finally, as namespace
support was added relatively late in the process, there is as of today no way
for a Rule to get the current namespace URI.
There are two options to achieve giving the rules more information about the
input document context:
1. Let Digester maintain the context using a stack (Element or a more
lightweight class, whatever), and have the rule access that stack, or
2. Pass information about the current document context into the begin(), body()
and end() methods of the rule.
I see a lot of advantages about approach number 2.
- 90% of the current rules in Digester don't need information about the input
context, so maintaining the Element stack would be a total waste of resources.
- The Rule methods begin(), body() and end() themselves, and their current
arguments (Attributes and bodyText), already inform the rule about the input
context.
- Rule implementations are more decoupled from Digester.
- Arguably, a rule shouldn't have access to the full input context (i.e. the
elements under the top element in the stack), it should only need to access
information about the element it was matched on.
- As said before, the class Digester is already bloated.
Sorry for the long explanation, I hope it is understandable.
thanks very much for the perseverance Christopher. i think that after all that discussion
we've come to a very good solution.
- robert
Created an attachment (id=3235)
Unified diff containing the changes described above
difficulties with the carousel ext.
Hey,
I just discovered the carousel extension and tried to integrate it into my website. Unfortunately I was only partly successful.
I am already using ext 2.2. I additionally included the ext-core-debug.js of 3.0 and carousel.js, but it told me ext.ux.carousel is not a constructor.
Then I tried to use it only with the ext-all.js of 2.2 and the carousel.js and deleted the 3.0 core stuff.
It partly works.
autoPlay works, but there is no navigation bar dropping down of the header.
Code:
new Ext.ux.Carousel('simple-example', {
    itemSelector: 'img',
    interval: 3,
    autoPlay: true,
    showPlayButton: true,
    pauseOnNavigate: true,
    freezeOnHover: true,
    transitionType: 'easeIn',
    navigationOnHover: true
});
Another thing I discovered is that the images get listed below each other, like this:
Code:
<img> <img> <img>
How would you solve it? Set all images to display:none and switch them to block after the gallery is rendered?
thx for help in advance
tobi
3.0 and 2.2? What?
no!
There can only be one Ext namespace.
do you have a public example?
Ext 3.0 Core is not Ext JS 3.0!
Jay Garcia @ModusJesus || Modus Create co-founder
Ext JS in Action author
Sencha Touch in Action author
Get in touch for Ext JS & Sencha Touch Touch Training
tobi
do you include the carousel.css file?
Javier Rincón aka SysCobra
There is only ext-base.js at the moment, no 3.0 core.
the carousel.css is copied in my local stylesheet.
it works, but there is no navigation bar on top. the slideshow works
OK, what would you expect? it was built using Ext3-core.
i don't understand why this is an issue?
Jay Garcia @ModusJesus || Modus Create co-founder
Ext JS in Action author
Sencha Touch in Action author
Get in touch for Ext JS & Sencha Touch Touch Training
Hello,
Respectfully, I have this same issue.
Basically, I would like to use the Ext Core 3 beta carousel on an existing ExtJS 2.2 page.
Specifically, within a Ext.TabPanel and Ext.Viewport Borderlayout.
Unfortunately, what it sounds like is that this is not possible because:
1. Core 3 beta - because not all widgets are included in Core. Or, in...
2. ExtJS2.2.1 - because the carousel was built on core3.
Is there a best-case solution for either...
A. backward compatibility?
B. forward compatibility?
Thanks.
You seem to be missing the point of what Ext Core is.
It's not an add-on that you can mix-match with 2.x code. It's a subset of the full Ext 3.0 API, that's not intended to have any widgets. Almost all the code in Ext Core exists in Ext 2.x, although refactored somewhat to be standalone and have a smaller footprint.
Carousel is just a little demo of using some of core functionality - it's not intended to be a fullblown widget. I highly doubt that it's going to be backported to 2.x by Ext staff.
The upgrade path from 2.x is not Ext Core, it's Ext 3.0. Ext Core is intended for use on a site where you don't need the weight of the entire Ext 3.0 package.
Tim Ryan
Read BEFORE posting a question / BEFORE posting a Bug
Use Google to Search - API / Forum
API Doc (4.x | 3.x | 2.x | 1.x) / FAQ / 1.x->2.x Migration Guide / 2.x->3.x Migration Guide
OK, just checked, and it doesn't work because Ext 2.2 doesn't have the events 'mouseenter' and 'mouseleave'. These are used to get the underlying components of the container, including the navigation panel. If you change them to 'mouseover' and 'mouseout' (the ones that Ext 2.2 has), then you can't click the panel, because when the mouse goes to the navigation the container receives the 'mouseout' event and the navigation panel hides.
This only happens if you set freezeOnHover or navigationOnHover to true. Guess you can use it with the navigation panel always shown and without freezing the animation on hover, as in the other examples (people can always click stop, go one by one as they like, and click play again to resume).
Javier Rincón aka SysCobra
- It's a subset of the full Ext 3.0 API, that's not intended to have any widgets.
yes.
Jay Garcia @ModusJesus || Modus Create co-founder
Ext JS in Action author
Sencha Touch in Action author
Get in touch for Ext JS & Sencha Touch Touch Training
# program test_config_msys.py
-"""Test config_msys.py for against a dummy directory structure.
+"""Test config_msys.py against a dummy directory structure.
This test must be performed on an MSYS console.
"""
import os.path
import sys
-# Ensure the execution environment is correct
-if not ("MSYSTEM" in os.environ and os.environ["MSYSTEM"] == "MINGW32"): # cond. and
- print "This test must be run from an MSYS console."
- sys.exit(1)
-
test_dir = './testdir'
if not os.path.isdir(test_dir):
print "Test directory %s not found." % test_dir
Internet Development Glossary
A
- alpha
Pertaining to a pixel's opacity. A pixel with the maximum alpha value is opaque, one with a value of zero is transparent, and one with an intermediate value is translucent.
- alpha blending
In computer graphics, a technique that causes a foreground image to appear partially transparent over a background image. The technique blends the background image with partially transparent pixels in the foreground image by performing a weighted average of the color components of the two images.
- alpha premultiplied
The technique of scaling the three color components of a sample by alpha before storing their values. This saves many mathematical steps when alpha blending two images. For the PMARGB32 pixel format, all color values are alpha premultiplied.
- ARGB32
One of the two common pixel formats supported by DirectX. ARGB32 consists of uncompressed alpha, red, green, and blue.
- attached
Physically connected to a system. A device can be installed without being attached.
- authentication data
A.
B
- bilinear
A rendering method used to map a source image to a target image. This method uses the weighted average of the four nearest source pixels to define a target pixel.
- binding source
In data binding, the object from which the value is obtained.
- binding target
In data binding, the object that consumes the value of the binding.
- block-level element
An HTML element that, in general, begins a new line. A block-level element may include other block-level elements or inline elements.
- blog
A frequently updated online journal or column.
C
- Canvas
An HTML5 element that is part of the W3C HTML5 specification. This element allows dynamic scriptable rendering of pixels, bitmaps, and 2D shapes such as rectangles, polygons, and ellipses.
- certificate store
A permanent storage where certificates, certificate revocation lists, and certificate trust lists are stored. A certificate store can also be temporary when working with session-based certificates.
- code page
A table that relates the character codes (code point values) used by a program to keys on the keyboard or to characters on the display. This provides support for character sets and keyboard layouts for different countries or regions.
- collection
An object that contains a set of related objects. An object's position in the collection can change whenever a change occurs in the collection; therefore, the position of any specific object in a collection may vary.
- color key
A color used for transparent or translucent effects. An overlay surface is displayed in the region of the primary surface that contains the color key. In video production, color keys are used to combine two video signals. Also called a chroma key.
- combinator
In a cascading style sheet (CSS) selector, a character that expresses a relationship between two simple selectors, such as the descendant (white space), child (>), or adjacent sibling (+) combinators.
- compositing
The process of combining two images to form a new image. The most common compositing operation is an over operation, in which one image is placed over another, taking into account the alpha information of both images.
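A minimal sketch of the over operation described above, assuming premultiplied-alpha pixels with channels in the 0.0-1.0 range (names are illustrative):

```python
def over(src, dst):
    """Porter-Duff 'over': composite a premultiplied-alpha src pixel
    onto a dst pixel. Each pixel is (r, g, b, a), channels 0.0-1.0."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    k = 1.0 - sa  # how much of the destination shows through
    return (sr + dr * k, sg + dg * k, sb + db * k, sa + da * k)

# Half-transparent red over opaque blue:
red = (0.5, 0.0, 0.0, 0.5)   # premultiplied 50%-alpha red
blue = (0.0, 0.0, 1.0, 1.0)
print(over(red, blue))  # -> (0.5, 0.0, 0.5, 1.0)
```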
- cryptographic digest
The result of a one-way hash function that takes a variable-length input string and converts it to a fixed-length output string. This fixed-length output string is probabilistically unique for every different input string and thus can act as a fingerprint of a file. It can be used to determine whether a file was tampered with.
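A quick illustration with Python's standard hashlib (SHA-256 chosen arbitrarily as the one-way hash function):

```python
import hashlib

data = b"hello world"
digest = hashlib.sha256(data).hexdigest()
print(digest)  # fixed-length (64 hex chars) fingerprint of the input

# Any change to the input, however small, changes the digest:
assert hashlib.sha256(b"hello worlD").hexdigest() != digest
```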
D
- data binding
The process of creating a link between a property and a source. The source can be local or external.
- data space
A series of transforms that operate on data in a specific order.
- descent
The pixel offset of the bottom of an element with respect to its baseline.
- device context
A data structure that defines the graphic objects, their associated attributes, and the graphic modes that affect output on a device.
- DirectX Immediate Mode
A rendering API in which client calls directly cause rendering of graphics objects to the display.
- DirectX Retained Mode
A COM-based scene graph API.
- dirty range
An area in a markup container where changes have occurred.
- display pointer
A pointer that marks a position in the markup text of an HTML document during editing in relation to the onscreen position of the rendered page. A display pointer is controlled by a GetDisplayGravity interface. Display pointers work in conjunction with markup pointers.
- dithering
A method to display a range of colors with a limited palette. Each pixel on the source image is represented by multiple pixels (usually a 2x2 square) on the destination image. From a distance, the eye blends the multiple pixels into one color that has more shades than the original palette. The technique results in a better visual appearance than the removal of low-precision bits.
- double bang
A double negation operator (!!), used to force a Boolean return value.
- drawing container
In SVG, an element that groups graphics elements, used as a partial canvas.
E
- execute buffer
A fully self-contained, independent packet of information that describes a 3-D scene. An execute buffer contains a vertex list followed by an instruction stream. The instruction stream consists of operation codes and the data that is operated on by those codes.
F
- F12 Developer Tools
Web development tools that are accessible in Internet Explorer by pressing F12 or clicking Developer Tools on the Tools menu.
- full delegation
A delegation in which a layout behavior requests complete control over the visual layout of elements.
- full PIDL
A PIDL that uniquely describes an object relative to the desktop folder.
H
- hidden helper
An input element used to store information about the state of a webpage.
- HTTP verb
An instruction sent in a request message that notifies an HTTP server of the action to perform on the specified resource. For example, GET specifies that a resource is being retrieved from the server. Common verbs include GET, POST, and HEAD.
I
- icon overlay
An icon that appears on top of the taskbar button icon, used to communicate alerts, notifications, and status. Icon overlays are a feature of pinned sites in Internet Explorer 9.
- inline element
An HTML element that typically does not start a new line, such as EM, FONT, and SPAN.
- inline SVG
SVG markup that is included in the markup for a webpage.
- item identifier list
An ordered sequence of one or more item identifiers. Each item in the list corresponds to a namespace object.
J
L
- literal content
The content inside an element's open and closing tags. This content is not parsed or rendered by MSHTML.
- local registration authority
An intermediary between a software publisher and a CA. The local registration authority can, for example, verify a publisher's credentials before sending them to the CA.
M
- markup container
The staging area for editing an HTML document or HTML fragments.
- master element
An element in a parent document to which a child document is attached. Examples of master elements are input, frame, iframe, or elements created by an element behavior.
- Multi Line Mode
A text box mode that allows data entry on multiple lines.
N
- natural sizing
The default layout and sizing of a collection of elements, as determined by MSHTML.
- nearest neighbor
A rendering method used to map a source image to a target image. This method uses only the nearest source pixel to define a target pixel.
O
- one-off address
An address for a message recipient that is not contained in an address book.
P
- palettized surface
A surface in which each pixel color is represented by a number that indexes into a color palette.
- PIDL
A pointer to an item identifier list. In the Shell API, namespace objects are usually identified by a PIDL.
- pinned site
A website that's pinned to the taskbar on the Windows desktop, which provides one-click access to the website.
- pixel format
The size and arrangement of pixel color components. The format is specified by the total number of bits used per pixel and the number of bits used to store the red, green, blue, and alpha components of the color of the pixel.
- PMARGB32
One of the two common pixel formats supported by DirectX. PMARGB32 uses 8-bit values for alpha, red, green, and blue, for a total of 32 bits per pixel. Each color is alpha premultiplied, which makes alpha blending operations more efficient.
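Alpha premultiplication itself is simple; a sketch in plain Python (integer 0-255 channels, illustrative helper name only):

```python
def premultiply(r, g, b, a):
    """Convert a straight-alpha ARGB pixel (0-255 channels) to
    premultiplied values, as used by PMARGB32-style formats."""
    return (r * a // 255, g * a // 255, b * a // 255, a)

# 50%-alpha pure red premultiplies to half-intensity red:
print(premultiply(255, 0, 0, 128))  # -> (128, 0, 0, 128)
```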
- posterizing
A lookup table operation that reduces the number of colors used in an image.
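A lookup-table sketch in plain Python (hypothetical helper, not a DirectX API): each of the 256 possible channel values is snapped to one of a few evenly spaced levels.

```python
def make_posterize_lut(levels):
    """Build a 256-entry lookup table that snaps each channel value
    to one of `levels` evenly spaced values (0..255)."""
    step = 255 / (levels - 1)
    return [round(round(v / step) * step) for v in range(256)]

lut = make_posterize_lut(4)  # 4 levels: 0, 85, 170, 255
pixel = (200, 90, 10)
print(tuple(lut[c] for c in pixel))  # -> (170, 85, 0)
```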
- procedural surface
A surface with pixel RGB color and alpha values defined dynamically. Only the procedure used to compute the surface is stored in memory.
- pseudo-class
A class used by CSS selectors to allow information that is external to the HTML source, such as whether a link has been visited, to classify elements.
- pseudo-element
An element used by CSS selectors to style typographical rather than structural elements.
R
- relative PIDL
A PIDL that is relative to a root object in the shell namespace other than the desktop folder. This is commonly the parent folder of the item.
- render
To display video, audio, or text content from a file or stream using a software program, such as Windows Media Player.
S
- sample
An indivisible element of an image that is stored in computer memory. The terms pixel and sample are often used interchangeably.
- sample runmap
An array of sample runs that make up an entire image.
- Scalable Vector Graphics
An XML-based language for device-independent description of two-dimensional graphics. SVG images maintain their appearance when printed or when viewed with different screen sizes and resolutions. SVG is a recommendation of the W3C.
- selection anchor
The point at which a selection operation was initiated. This point might be at the visual beginning or end of the selection, depending on how the user made the selection. For example, if the user makes a text selection by moving the mouse pointer from the end of a sentence to its beginning, the selection anchor will be at the end of that sentence.
- selection end
The point at which a selection operation ends. This point might be at the visual beginning or end of the selection, depending on how the user made the selection. For example, if the user makes a text selection by moving the mouse pointer from the end of a sentence to its beginning, the selection end will be at the beginning of that sentence.
- semantic layout
Markup that is based on meaning or intention, as opposed to direct specification of style.
- simple PIDL
A PIDL that is parsed without disk verification.
- site selectable
Capable of being selected as a whole element in the editor. Any element with a height or width attribute, either implicitly or explicitly defined, is site selectable.
- Software Publishing Certificate
A PKCS #7 signed-data object containing X.509 certificates. The certificate contains the verifiable public key of a trusted software publisher.
- storage
A logical grouping of data or objects within a compound file that can contain streams or other subordinate storages. The relationship between storages and streams in a compound file is similar to that of folders and files.
- surface picking
The process of choosing which input surface contributes most to the output surface at a certain position on the output. Surface picking is often used with mouse operations to choose different actions in code that depend on which image you select. In cases where several images are alpha blended on the output point, the transform can use the alpha channel of each input sample to choose the input surface.
- SVG
An XML-based language for device-independent description of two-dimensional graphics. SVG images maintain their appearance when printed or when viewed with different screen sizes and resolutions. SVG is a recommendation of the W3C.
T
- task manager
A generic service used to schedule and run caller-defined tasks. A task manager automatically breaks a transform task into threads and manages their completion, which improves the efficiency of the transform.
- threshold filtering
The process of reducing a full-color image to an eight-color image. This is done by setting a value for a threshold, which is the cutoff value for each color component of a sample.
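A minimal sketch of the cutoff idea in plain Python (illustrative only): each color component is compared to the threshold, yielding one of the eight corner colors.

```python
def threshold_filter(pixel, threshold=128):
    """Snap each RGB component to 0 or 255 depending on the threshold,
    reducing a full-color pixel to one of eight colors."""
    return tuple(255 if c >= threshold else 0 for c in pixel)

print(threshold_filter((200, 90, 130)))  # -> (255, 0, 255)
```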
- thumbnail toolbar
A toolbar control that is embedded in a window's thumbnail preview. Thumbnail toolbars are a feature of pinned sites in Internet Explorer 9.
- ticket
A set of identification data for a security principal, issued by a domain controller for purposes of user authentication.
V
X
- x-ua-compatible header
An HTML header in which the HTTP-EQUIV property has a value of x-ua-compatible, which allows content to specify the document compatibility modes supported by the webpage.
https://msdn.microsoft.com/en-us/library/ms537018(d=printer,v=vs.85).aspx
Choices Class
Represents a list of alternative items to make up an element in a grammar.
Microsoft.Speech.Recognition.Choices
Namespace: Microsoft.Speech.Recognition
Assembly: Microsoft.Speech (in Microsoft.Speech.dll)
https://msdn.microsoft.com/en-us/library/microsoft.speech.recognition.choices.aspx
Why have AppName at the start of fully qualified class paths?
(prompted by a late reply on a bug already marked as fixed - custom folder structure)
The reply was about adopting a first-letter uppercase vs. lowercase convention to differentiate the namespace (alias?) from the package path in a class path.
I assume what is meant is having 'MyPath.MyClass' as a shorthand reference to 'MyApp.path.to.MyClass', while having used setAlias('MyApp.path.to', 'MyPath')?
The truth is that most languages that rely on packages don't have the AppName as the first item in the path. You would use 'path.to.MyClass' and it would be assumed that it matches the folder structure starting from the source or the app folder. Avoiding a mention of the AppName in the package path makes the code more portable (easier to re-use the same code across projects).
If you use these other languages as inspiration, 'MyApp.path.to.MyClass' could instead be expressed as package: path.to.MyClass and, optionally, a second parameter that lets you define a baseFolder (or namespace) 'MyApp'. I think you already have some aspect of that baseFolder functionality in, through the 'alias' property. [Wondering, could you have more than one application loaded from an index.html? If not, the appName is completely predictable and therefore of no value as information.]
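The alias-plus-package resolution being discussed can be sketched framework-free (this is not Ext.Loader's actual algorithm, just an illustration of mapping a class name through an alias table to a file path):

```python
def resolve(class_name, aliases):
    """Map a fully qualified class name to a source file path,
    expanding alias prefixes first (e.g. 'MyPath' -> 'MyApp.path.to')."""
    for alias, target in aliases.items():
        if class_name == alias or class_name.startswith(alias + "."):
            class_name = target + class_name[len(alias):]
            break
    return class_name.replace(".", "/") + ".js"

aliases = {"MyPath": "MyApp.path.to"}
print(resolve("MyPath.MyClass", aliases))           # MyApp/path/to/MyClass.js
print(resolve("com.domain.project.AppView", aliases))  # com/domain/project/AppView.js
```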
This would solve the problem without requiring a mix of conventions within a single item. With editors like Sublime Text allowing you to refer to folders outside of your project, it is possible to think of building a library of components re-used across applications.
There seems to be an interest from Sencha to attract people from an enterprise background (e.g., Flex). However, small deviations from the usual conventions, like the addition of MyApp at the start of the path, easily create major headaches for code structure and re-use in larger applications.
Another aspect of these reverse url conventions for namespaces is how easy or difficult you can make it to re-use community contributed components.
With Sencha, it looks like the practice is to release official plugins and name them something like Ext.layout.AccordionLayout or Ext.ux.touch.Rating. Implicit to this naming convention is the expectation that there can only ever be one trustworthy accordion layout or rating component.
Better support for fully qualified namespaces (without AppName at the start of the path) could more explicitly encourage average Joe's contributions, with com.average-joe.Rating happily co-existing in the ecosystem with Ext.ux.touch.Rating.
The reverse url is widely adopted in communities that show a very strong ecosystem, with many quality libraries and components contributed on github and google code.
The problem then is to balance out the benefits and costs of this approach. Custom paths give more freedom but make the code more bloated and more difficult to read if you have to write fully qualified class names everywhere. Javascript doesn't really offer any opportunity for an import statement that would let you define the fully qualified class reference first and then access it via the className only later. Aliases could help with this, but only to some extent: aliases appear to be global, whereas an import statement defines a reference that is active only in the current class.
Sorry it has taken me this long to jump on this thread... forgot about it but I promised I would jump on it so I am
We have used the naming convention Ext.grid.Panel where the root (Ext) and the actual name (Panel) are UpperCamelCase and the inner namespaces (grid) are all lowercase. We have been using this naming convention since all of this was started in 2006.
That being said, you are not forced to. If you name your classes something like com.users.list then you actually can. The only thing we do force is that the file structure matches your class name and is case sensitive. We recommend you use the UpperCamelCase but it's really not required.
Now the plugins that aren't actually part of the framework are up to the author of those classes. For instance I used to use the Ext.ux (kind of an unspoken namespace for 3rd party extensions) but have now started to use the Ux namespace instead of Ext.
So to sum it up... we have naming conventions and recommendations, but they are not really required.

I would really appreciate it if you could provide a working example. I have been looking on github and googlecode and couldn't find any example using a custom path structure.
If I try to deviate from the app>[controller/view/model] recommended structure, then I end up with a lot of weird outcomes (v2pr3).
(using AppName at the start of the class path)
I can only get the code to work if the controller that is defined in app.js (main script, declared in the index.html file) is in a folder app/controller. If I do something as simple as renaming app/controller to app/controllers (making all necessary changes), the app stops working. The culprit is an attempt to load a view dynamically from a controller, using Ext.ClassManager.get('AppName.AppView').create(); (using AppName.view.AppView makes no difference). I grant you, not an orthodox approach, but the issue is that as soon as you deviate a tiny bit from the app/[controller/view/model] basic structure, other, seemingly unrelated, parts of the Sencha code start to break in odd ways.
(not using AppName at the start of the class path, using fully qualified names of the type com.domain.project)
If I use views : [ 'com.domain.project.AppView' ], Sencha tries to load the file from app/view/com/domain/project/AppView.js, not com/domain/project/AppView.js.
If you name your classes something like com.users.list then you actually can.
The second aspect of the discussion was how the recommendation of using Ext.ux (or ux) for user contributed content could hinder the development of an ecosystem around Sencha.
If I go to codeCanyon, I get 750 matches for jQuery, One for ext (but nothing to do with ext-js). One match for sencha.
Is the lack of availability of components outside of those provided by the official routes something that you perceive as a problem or not? Yes, you can find some (touch-datepicker, touch-calendar) but they seem to be few, or at the least, difficult to find.
I've updated how this works as of the next release. There's no longer any guesswork based on the case of the package - instead Application will just interrogate Ext.Loader to resolve dependencies. This means that for any external dependencies (things from outside your app) you'll just have to specify the path above your Ext.application.
Hopefully this gives a good balance of flexibility and convenience. The docs are also updated for the next release to show the changes.

Ext JS Senior Software Architect
Thanks for that. Patiently waiting for next release, then.
B3 is now live at - hope the updated dependency management better meets your needs.

Ext JS Senior Software Architect
Thanks Ed. I will check it out.
What I am trying to do is figure out a good way to write re-usable widgets. A widget could capture an editable event calendar such as this one:
(ignore the glitches, tested on safari)
And ideally, it should be possible to organize the javascript code in a way similar to this:
A mvc separation is applied, but it is done in each folder, agenda (catalogue of events) and calendar (day views), with class names that are closer to the ones I am used to (Flex conventions) than the ones in Sencha.
Some components used by the widget have the potential to be reused across projects. The ui/MoreOverflow.js class could be used in a todo list application or other list contexts. As such, they are better kept outside of the calendar folder and in a more neutral ui one.
It should be possible to add variations without modifying any file in the original author's namespace. The original Frontier calendar provided a typical monthly view. I was after a week by week one. A day view could be contributed by another user later on, without a need for me to update my own plugin. That other user should be able to link to my library and extend it, by adding and editing files in a separate namespace.
I will give a shot a turning that jQuery plugin into a Sencha one. I will keep track of the problems I come across.
Still getting the same errors. Code that was working with touch 2 pr 3 doesn't run with any of the beta versions, including beta 3.
If, in the main app controller, I specify
Code:
views : [ 'AppName.AppView', ]
Three problems. (1) I don't have a folder app. I renamed it to app1 for the purpose of testing custom path support. (2) AppView is at "app1/AppView.js", not "app/view/AppView/AppView.js". (3) AppView shows up twice in the path ('../AppView/AppView.js'), though this happens inconsistently; most of the time, I get 'Failed loading 'app/view/AppName/AppView.js''.
https://www.sencha.com/forum/showthread.php?179316-Why-have-AppName-at-the-start-of-fully-qualified-class-paths
Source
say / README.rst
print, format, and %, evolved.
Q: It's been forty years since C introduced printf() and the basic formatted printing of positional parameters. Isn't it time for an upgrade?
A: Yes! ZOMG, yes!
say supplements or replaces Python's print statement/function, format function/method, and % string interpolation operator with higher-level facilities:
- Straightforward string formatting with DRY, Pythonic templates that piggyback the built in format() method, formatting syntax, and well-proven underlying engine.
- A single output mechanism compatible with both Python 2.x and Python 3.x.
- Indentation and wrapping (to help structure output)
- Convenience printing functions for horizontal rules (lines), titles, and vertical whitespace.
Usage
from say import say, fmt, show

x = 12
nums = list(range(4))

say("There are {x} things.")
say("Nums has {len(nums)} items: {nums}")
yields:
There are 12 things. Nums has 4 items: [0, 1, 2, 3]
say is basically a simpler, nicer recasting of:
print "There are {} things.".format(x)
print "Nums has {} items: {}".format(len(nums), nums)
(NB in Python 2.6 one must number each of the {} placeholders--e.g. "Nums has {0} items: {1}"-- in order to avoid a ValueError: zero length field name in format error. Python 2.7 and later assume the placeholders are sequential.)
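For instance, using plain str.format (no say involved), the auto-numbered and explicitly numbered styles are equivalent on Python 2.7+/3.x:

```python
x = 12
nums = list(range(4))

# Auto-numbered placeholders (Python 2.7+ / 3.x):
auto = "Nums has {} items: {}".format(len(nums), nums)
# Explicitly numbered placeholders (required on Python 2.6):
explicit = "Nums has {0} items: {1}".format(len(nums), nums)

assert auto == explicit
print(auto)  # -> Nums has 4 items: [0, 1, 2, 3]
```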
The more items that are being printed, and the more complicated the format invocation, the more valuable having it stated in-line becomes. Note that full expressions are supported. They are evaluated in the context of the caller.
Printing Where You Like
say() writes to a list of files--by default just sys.stdout. But with one simple configuration call, it will write to different--even multiple--files:
from say import say, stdout

say.setfiles([stdout, "report.txt"])
say(...)   # now prints to both stdout and report.txt
This has the advantage of allowing you to both capture and see program output, without changing any code (other than the config statement). You can also define your own targeted Say instances:
from say import say, Say, stderr

err = say.clone().setfiles([stderr, 'error.txt'])
err("Failed with error {errcode}")   # writes to stderr, error.txt
Note that stdout and stderr are just convenience aliases to the respective sys equivalents.
Printing When You Like
If you want to stop printing for a while:
say.set(silent=True) # no printing until set to False
Or transiently:
say(...stuff..., silent=not verbose) # prints iff bool(verbose) is True
Of course, you don't have to print to any file. There's a predefined sayer fmt() that works exactly like say() and inherits most of its options, but doesn't print. (The C analogy: say : fmt :: printf : sprintf.)
Indentation and Wrapping
Indentation is a common way to display data hierarchically. say will help you manage it. For example:
say('TITLE')
for item in items:
    say(item, indent=1)
will indent the items by one indentation level (by default, each indent level is four spaces, but you can change that with the indent_str option).
If you want to change the default indentation level:
say.set(indent=1)     # to an absolute level
say.set(indent='+1')  # strings => set relative to current level
...
say.set(indent=0)     # to get back to the default, no indent
Or you can use a with construct:
with say.settings(indent='+1'):
    say(...)   # anything say() emits here will be auto-indented +1 levels

# anything say() emits here, after the with, will not be indented +1
And if you have a lot of data or text to print, you can easily wrap it:
say("This is a really long...blah blah blah", wrap=40)
will automatically wrap the text to the given width (using Python's standard textwrap module).
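Since the wrapping rides on the standard textwrap module, the underlying behavior can be seen directly:

```python
import textwrap

text = "This is a really long sentence that keeps going blah blah blah"
wrapped = textwrap.fill(text, width=40)
print(wrapped)

# Every emitted line fits inside the requested width:
assert all(len(line) <= 40 for line in wrapped.splitlines())
```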
While it's easy enough for any print statement or function to have a few space characters added to its format string, it's easy to mistakenly type too many or too few spaces, or to forget to type them in some format strings. And if you're indenting strings that may themselves contain multiple lines, the simple print approach breaks because it won't take multi-line strings into account. Nor will it be integrated with wrapping.
say, however, simply handles the indent level and wrapping, and it properly handles the multi-line string case. Subsequent lines will be just as nicely and correctly indented as the first one--something not otherwise easily accomplished without adding gunky, complexifying string manipulation code to every place in your program that prints strings.
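What correct multi-line indentation involves can be sketched with only the standard library (this shows the mechanic, not say's API):

```python
import textwrap

block = "first line\nsecond line"
indented = textwrap.indent(block, "    ")  # 4 spaces = one indent level
print(indented)

# Subsequent lines are indented just like the first one:
assert indented == "    first line\n    second line"
```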
This starts to illustrate the "do the right thing" philosophy behind say. So many languages' printing and formatting functions are restricted to "outputting values" at a low level. They may format basic data types, but they don't provide straightforward ways to do neat text transformations like indentation that let programmers rapidly produce correct, highly-formatted output. Over time, say will provide higher-level formatting options. For now: indentation and wrapping.
Encodings
say() and fmt() try to work with Unicode strings, for example providing them as return values. But character encodings remain a fractious and often exasperating part of IT. When writing formatted strings, say handles this by encoding into utf-8.
If you are using strings containing utf-8 rather than Unicode characters, say may complain. But it complains in the same places the built-in format() does, so no harm done. (Python 3 doesn't generally allow utf-8 in strings, so it's cleaner on this front.)
You can get creative with the encoding: set it to None and say() returns text rather than encoded bytes, or set it to an actual encoding name and output is encoded accordingly. (Horizontal rules and titles can be drawn with '-', '=', or parts of the Unicode box drawing character set.)
Python 3
Say works virtually the same way in Python 2 and Python 3. This can simplify software that should work across the versions, without all the from __future__ import print_function hassle.
say attempts to mask some of the quirky complexities of the 2-to-3 divide, such as string encodings and codec use.
Alternatives
- ScopeFormatter.
- The show debug printing functions have been split into a separate package, show.
- Automated multi-version testing is managed with the wonderful pytest and tox. say is now successfully packaged for, and tested against, all late-model versions of Python: 2.6, 2.7, 3.2, and 3.3, as well as PyPy 1.9 (based on 2.7.2).
- say has greater ambitions than just simple template printing. It's part of a larger rethinking of how output should be formatted. show() is an initial down-payment. Stay tuned for more.
- In addition to being a practical module in its own right, say is testbed for options, a package that provides high-flexibility option, configuration, and parameter management.
- The author, Jonathan Eunice or @jeunice on Twitter welcomes your comments and suggestions.
To-Dos
- Provide code that allows pylint to see that variables used inside the say and fmt format strings are indeed thereby used.
Installation
To install the latest version:
pip install -U say
To easy_install under a specific Python version (3.3 in this example):
python3.3 -m easy_install --upgrade say
(You may need to prefix these with "sudo " to authorize installation.)
https://bitbucket.org/jeunice/say/src/8ce86e820a72/README.rst
django-guardian is an implementation of per-object permissions [1] as an authorization backend, which is supported since Django 1.2. It won't work with older Django releases.
Online documentation is available at.
To install django-guardian simply run:
pip install django-guardian
We need to hook django-guardian into our project.
Put guardian into your INSTALLED_APPS at settings module:
INSTALLED_APPS = ( ... 'guardian', )
Add extra authorization backend:
AUTHENTICATION_BACKENDS = ( 'django.contrib.auth.backends.ModelBackend', # default 'guardian.backends.ObjectPermissionBackend', )
Configure anonymous user ID
ANONYMOUS_USER_ID = -1
After installation and project hooks we can finally use object permissions with Django.
Lets start really quickly:
>>> jack = User.objects.create_user('jack', 'jack@example.com', 'topsecretagentjack')
>>> admins = Group.objects.create(name='admins')
>>> jack.has_perm('change_group', admins)
False
>>> UserObjectPermission.objects.assign_perm('change_group', user=jack, obj=admins)
<UserObjectPermission: admins | jack | change_group>
>>> jack.has_perm('change_group', admins)
True
Of course our agent jack here would not be able to change_group globally:
>>> jack.has_perm('change_group') False
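The global-vs-per-object distinction can be sketched without Django at all (a toy store with illustrative names, not guardian's implementation):

```python
class ObjectPermissionStore:
    """Toy per-object permission backend: grants are keyed by
    (user, permission, object) rather than just (user, permission)."""

    def __init__(self):
        self.grants = set()

    def assign_perm(self, perm, user, obj):
        self.grants.add((user, perm, id(obj)))

    def has_perm(self, user, perm, obj=None):
        if obj is None:  # global check: object-level grants don't apply
            return False
        return (user, perm, id(obj)) in self.grants

store = ObjectPermissionStore()
admins = object()
store.assign_perm("change_group", "jack", admins)
print(store.has_perm("jack", "change_group", admins))  # True: granted on this object
print(store.has_perm("jack", "change_group"))          # False: no global grant
```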
Replace admin.ModelAdmin with GuardedModelAdmin for those models which should have object permissions support within admin panel.
For example:
from django.contrib import admin
from myapp.models import Author
from guardian.admin import GuardedModelAdmin

# Old way:
#class AuthorAdmin(admin.ModelAdmin):
#    pass

# With object permissions support
class AuthorAdmin(GuardedModelAdmin):
    pass

admin.site.register(Author, AuthorAdmin)
BSD
https://crate.io/packages/django-guardian/
I run a forum in my niche. One of the areas we talk about is SEO. So I posted a thread along the lines of 'My Christmas gift to you'. And I told everyone how to get a free, top quality link. And it was a good link - dmoz type quality. No cost, no obligation, go get it.
Then I suggested they might be a bit embarrassed since I got them such a nice gift, and they didn't get me anything. And no, I don't want to be regifted. So I did a paragraph or two on how they could get me a Christmas gift by giving me a link from their site in return (which has just increased in importance due to this new high quality link I showed them how to get). And I further explained how linking to high quality relevant sites like mine helps them even more as well (gotta make the case of 'what's in it for them'.).
End result, 2 or 3 relevant one way inbound links all from sites in my niche.
https://www.webmasterworld.com/link_development/3813582.htm
render jsf page in java - S K, Jan 8, 2011 11:49 AM
Hi,
I'm not sure whether this is a right forum to ask, I see this is a J2EE design pattern. I wanted to render a JSF(xhtml) page in java class. Does anyone knows about the api or a sample program? I know there is one available in Seam but I'm not using seam in my application.
Thanks in advance
SK
1. render jsf page in java - jaikiran pai, Jan 8, 2011 12:24 PM (in response to S K)
S K wrote:
Hi,
I'm not sure whether this is a right forum to ask, I see this is a J2EE design pattern.
We have a JSF forum. I've moved this thread there.
2. render jsf page in java - Nicklas Karlsson, Jan 13, 2011 3:41 AM (in response to S K)
Also interested in this.
3. Re: render jsf page in java - Stan Silvert, Jan 13, 2011 4:35 PM (in response to Nicklas Karlsson)
Sorry I missed your post before Nicklas.
JSFUnit could certainly do something like that, but since you are only worried about the client-side HTML it would be simpler to just use plain HtmlUnit.
If you have a running JSF application to do the rendering then this is pretty easy with HtmlUnit.
WebClient webClient = new WebClient();
HtmlPage page = (HtmlPage)webClient.getPage("");
If you don't have a running JSF application it gets a little tougher. You could use a mock HttpServletRequest and HttpServletResponse. Then look at the source code for FacesServlet and see how it uses them to create a FacesContext and render the page. FacesServlet.java is pretty short and relatively easy to understand. So you would basically just do what FacesServlet does.
Stan
4. render jsf page in java - Nicklas Karlsson, Jan 14, 2011 10:50 AM (in response to Stan Silvert)
Technically would like to be able to pass a xhtml page to an asynchronous task or such and it would render the template so we're probably talking mocks. Seam 2 had this construct where it used mocks and swapped out the current FacesContext and replaced the output stream with a collecting BAOS if I remember correctly.
5. render jsf page in java - Stan Silvert, Jan 14, 2011 2:37 PM (in response to Nicklas Karlsson)
Yea, it wouldn't be that hard. It would make a nice open source project.
Stan
6. render jsf page in java - S K, Jan 14, 2011 6:53 PM (in response to S K)
Actually, my need was not for unit testing but rather to render a JSF page at runtime and send the output as an email. Here I could have used a Seam function, but I didn't use Seam in my project. Meanwhile I used a different approach to render the JSF page using the FacesContext class; the point is that you must run with an active FacesContext instance.
You can place the below code anywhere in your file,
public String renderView(String template) {
FacesContext faces = FacesContext.getCurrentInstance();
ExternalContext context = faces.getExternalContext();
HttpServletResponse response = (HttpServletResponse)
context.getResponse();
ResponseCatcher catcher = new ResponseCatcher(response);
try {
ViewHandler views = faces.getApplication().getViewHandler();
// render the message
context.setResponse(catcher);
context.getRequestMap().put("emailClient", true);
views.renderView(faces, views.createView(faces, template));
context.getRequestMap().remove("emailClient");
context.setResponse(response);
} catch (IOException ioe) {
String msg = "Failed to render email internally";
faces.addMessage(null, new FacesMessage(
FacesMessage.SEVERITY_ERROR, msg, msg));
return null;
}
return catcher.toString();
}
The ResponseCatcher class, which implements the HttpServletResponse interface:
package test;
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Collection;
import java.util.Locale;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;
/**
*
* @author SK
*/
public class ResponseCatcher implements HttpServletResponse {
/** the backing output stream for text content */
CharArrayWriter output;
/** a writer for the servlet to use */
PrintWriter writer;
/** a real response object to pass tricky methods to */
HttpServletResponse response;
private ServletOutputStream soStream;
/**
* Create the response wrapper.
*/
public ResponseCatcher(HttpServletResponse response) {
this.response = response;
output = new CharArrayWriter();
writer = new PrintWriter(output, true);
}
/**
* Return a print writer so it can be used by the servlet. The print
* writer is used for text output.
*/
public PrintWriter getWriter() {
return writer;
}
public void flushBuffer() throws IOException {
writer.flush();
}
public boolean isCommitted() {
return false;
}
public boolean containsHeader(String arg0) {
return false;
}
/* wrapped methods */
public String encodeURL(String arg0) {
return response.encodeURL(arg0);
}
public String encodeRedirectURL(String arg0) {
return response.encodeRedirectURL(arg0);
}
public String encodeUrl(String arg0) {
return response.encodeUrl(arg0);
}
public String encodeRedirectUrl(String arg0) {
return response.encodeRedirectUrl(arg0);
}
public String getCharacterEncoding() {
return response.getCharacterEncoding();
}
public String getContentType() {
return response.getContentType();
}
public int getBufferSize() {
return response.getBufferSize();
}
public Locale getLocale() {
return response.getLocale();
}
public void sendError(int arg0, String arg1) throws IOException {
response.sendError(arg0, arg1);
}
public void sendError(int arg0) throws IOException {
response.sendError(arg0);
}
public void sendRedirect(String arg0) throws IOException {
response.sendRedirect(arg0);
}
/* null ops */
public void addCookie(Cookie arg0) {}
public void setDateHeader(String arg0, long arg1) {}
public void addDateHeader(String arg0, long arg1) {}
public void setHeader(String arg0, String arg1) {}
public void addHeader(String arg0, String arg1) {}
public void setIntHeader(String arg0, int arg1) {}
public void addIntHeader(String arg0, int arg1) {}
public void setStatus(int arg0) {}
public void setStatus(int arg0, String arg1) {}
public void setCharacterEncoding(String arg0) {}
public void setContentLength(int arg0) {}
public void setContentType(String arg0) {}
public void setBufferSize(int arg0) {}
public void resetBuffer() {}
public void reset() {}
public void setLocale(Locale arg0) {}
/* unsupported methods */
public ServletOutputStream getOutputStream() throws IOException {
// soStream is never initialized, so returning it would hand back null;
// this wrapper only captures character output through getWriter()
throw new UnsupportedOperationException("Not supported yet.");
}
/**
* Return the captured content.
*/
@Override
public String toString() {
return output.toString();
}
public String getHeader(String string) {
return null;
}
public Collection<String> getHeaders(String string) {
return null;
}
public Collection<String> getHeaderNames() {
return null;
}
public int getStatus() {
throw new UnsupportedOperationException("Not supported yet.");
}
}
I also rendered a JSF page that used a CDI-injected bean in the page.
Thanks
SK
7. render jsf page in java - Stan Silvert, Jan 14, 2011 7:13 PM (in response to S K)
If you are running in-container, why not use HtmlUnit and just have the two lines of code like I showed earlier? You don't need to be in the context of a unit test to use the HtmlUnit API. HtmlUnit is just a headless browser.
Stan
8. render jsf page in java - Nicklas Karlsson, Jan 15, 2011 2:08 AM (in response to Stan Silvert)
So you're saying one could have an application-scoped JSF hidden somewhere, bootstrapped with a ServletContext-overridden class, and then do the virtual-render trick by faking what ServletContext does (with mock requests)? That way, e.g. MDBs could use that virtual JSF?
9. render jsf page in java - Stan Silvert, Jan 17, 2011 7:06 AM (in response to Nicklas Karlsson)
Yes, it's doable. But again, if there is an app server running somewhere you might as well use HtmlUnit and send real HttpRequests to a real FacesServlet running in a real environment.
Stan
10. render jsf page in java - Nicklas Karlsson, Jan 17, 2011 7:20 AM (in response to Stan Silvert)
The advantage of the standalone JSF would perhaps be that it could be separately configurable. And you wouldn't have to trick around with the FacesContext instance. And it could be run in a truly headless mode.
I did a quick run in SE and tried to get it running with a mocked ServletContext (ran one pass through the Mojarra 2 ConfigListener ServletContext-initialized event) and then another through the FacesServlet init(), but I must have gotten something wrong since the FactoryFinder wasn't that cooperative. Although I think you could work against the lifecycle directly, like the FacesServlet does.
https://community.jboss.org/message/580939
Hi all

Please CC me, I am not subscribed to debian-powerpc.

I am the kannel Debian package maintainer. Kannel is a WAP and SMS gateway. It has the possibility to use SSL to encrypt communications between the mobile WAP phone and the gateway. Kannel needs thread support in SSL. I have an open bug that states there is no support for multithreading in OpenSSL on some architectures, so kannel cannot be built with SSL support. PowerPC is one of the architectures where kannel fails to build.

I have looked at the voltaire.debian.org libssl-dev config, and in the /usr/include/openssl/opensslconf.h file there is no support for threads:

/* opensslconf.h */
/* WARNING: Generated automatically from opensslconf.h.in by Configure. */
/* OpenSSL was configured with the following options: */
#ifdef OPENSSL_ALGORITHM_DEFINES
/* no ciphers excluded */
#endif
#ifdef OPENSSL_THREAD_DEFINES
/* *** HERE: THREADS is not defined *** */
#endif
#ifdef OPENSSL_OTHER_DEFINES
# ifndef DSO_DLFCN
#  define DSO_DLFCN
# endif
# ifndef HAVE_DLFCN_H
#  define HAVE_DLFCN_H
# endif
#endif

I would like to know if there is a special reason for OpenSSL on powerpc not having thread support. I have mailed the libssl-dev maintainer, Christoph Martin, but I have had no response from him.

Thanks for your help.

--
- RedLibre -
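The manual header inspection the poster describes can be scripted; a small Python sketch (the opensslconf.h content below is abbreviated from the post — on a real system you would read /usr/include/openssl/opensslconf.h instead):

```python
import re

# Abbreviated opensslconf.h content from the post above; on a real
# system, read /usr/include/openssl/opensslconf.h instead.
conf = """
#ifdef OPENSSL_ALGORITHM_DEFINES
/* no ciphers excluded */
#endif
#ifdef OPENSSL_THREAD_DEFINES
#endif
"""

# Look inside the OPENSSL_THREAD_DEFINES block for a THREADS define.
block = re.search(r'#ifdef OPENSSL_THREAD_DEFINES\n(.*?)#endif', conf, re.DOTALL)
has_threads = 'define THREADS' in block.group(1)
print(has_threads)  # -> False: this libssl build lacks thread support
```

When OpenSSL was configured with thread support, the block would contain a `# define THREADS` line and the check would report True.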
https://lists.debian.org/debian-powerpc/2001/11/msg00335.html
Data Provider for SQL Server (SqlClient)
You can develop Microsoft® Windows® CE-based applications that access databases in Microsoft SQL Server™ version 7.0 or later by using the System.Data.SqlClient namespace. System.Data.SqlClient is the .NET Compact Framework Data Provider for SQL Server. This data provider corresponds to the System.Data.SqlClient namespace of the Microsoft .NET Framework.
Like its counterpart, System.Data.SqlClient in the .NET Compact Framework is a collection of classes that can be used to access SQL Server databases with managed code from Windows CE .NET-based devices.
Unless otherwise noted, all objects in the System.Data.SqlClient namespace match the corresponding objects in the System.Data.SqlClient namespace in the .NET Framework. For more information about the classes in this namespace, see the .NET Compact Framework SDK in Microsoft Visual Studio .NET.
Provider Limitations
The following lists limitations and exceptions that apply to Windows CE .NET-based devices and the .NET Compact Framework:
- Unsupported classes
SqlClientPermission and SqlClientPermissionAttribute classes are not supported.
- ConnectionString property
Applications using System.Data.SqlClient on Windows CE-based devices may leverage the Windows authentication protocol, instead of using SQL Server authentication. To do this, the connection string must include the following properties:
In addition, the following ConnectionString properties are not supported: AttachDBFilename, Max Pool Size, Min Pool Size, Connection Lifetime, Connection Reset, Enlist, Pooling, Network Library, and Encrypt.
- ANSI data
ANSI data is supported only for SQL_Latin1_General_CP1_CI_AS collations from an English-based device. System.Data.SqlClient generates an error when the code page for an ANSI-to-Unicode conversion is not available.
For information about code pages that are available to a specific Windows CE-based device, contact the device manufacturer.
- Connection pooling
Connection pooling is not supported. A device can only have a small number of connections to an instance of SQL Server at any time.
- Distributed transactions
Distributed transactions are not supported. Transactions cannot span databases or servers. System.Data.SqlClient generates an InvalidOperationException exception during a distributed transaction.
- Net-Library selection
Only TCP/IP connections to an instance of SQL Server are supported. System.Data.SqlClient cannot connect to SQL Server through a device cradle.
- Net-Library encryptions
Encrypted connections to an instance of SQL Server are not supported. If the computer running SQL Server has a Secure Sockets Layer (SSL) certificate installed, the connection will fail.
- Windows authentication
Windows authentication is supported; however, the User ID and Password used for authentication within the Domain Controller must always be specified in the connection string.
For more information, see the System.Data.SqlClient namespace reference in the .NET Compact Framework SDK in Microsoft Visual Studio .NET.
https://technet.microsoft.com/en-us/library/aa275613(v=sql.80).aspx
XAML-Related CLR Attributes for Custom Types and Libraries
This topic describes the common language runtime (CLR) attributes that are defined by .NET Framework XAML Services. It also describes other CLR attributes that are defined in the .NET Framework that have a XAML-related scenario for application to assemblies or types. Attributing assemblies, types, or members with these CLR attributes provides XAML type system information related to your types. Information is provided to any XAML consumer that uses .NET Framework XAML Services for processing the XAML node stream directly or through the dedicated XAML readers and XAML writers.
Using CLR attributes entails that you are using the overall CLR to define your types, otherwise such attributes are not available. If you use the CLR to define type backing, then the default XAML schema context used by .NET Framework XAML Services XAML writers can read CLR attribution through reflection against backing assemblies.
The following sections describe the XAML-related attributes that you can apply to custom types or custom members. Each CLR attribute communicates information that is relevant to a XAML type system. In the load path, the attributed information either helps the XAML reader form a valid XAML node stream, or it helps the XAML writer produce a valid object graph. In the save path, the attributed information either helps the XAML reader form a valid XAML node stream that reconstitutes XAML type system information; or it declares serialization hints or requirements for the XAML writer or other XAML consumers.
Reference Documentation: AmbientAttribute
Applies to: Class, property, or get accessor members that support attachable properties.
Arguments: None
AmbientAttribute indicates that the property, or all properties that take the attributed type, should be interpreted under the ambient property concept in XAML. The ambient concept relates to how XAML processors determine type owners of members. An ambient property is a property where the value is expected to be available in the parser context when creating an object graph, but where typical type-member lookup is suspended for the immediate XAML node set being created.
The ambient concept can be applied to attachable members, which are not represented as properties in terms of how CLR attribution defines AttributeTargets. The method attribution usage should be applied only in the case of a get accessor that supports attachable usage for XAML.
Reference Documentation: ConstructorArgumentAttribute
Applies to: Class
Arguments: A string that specifies the name of the property that matches a single constructor argument.
ConstructorArgumentAttribute specifies that an object can be initialized by using a non-default constructor syntax, and that a property of the specified name supplies construction information. This information is primarily for XAML serialization. For more information, see ConstructorArgumentAttribute.
Reference Documentation: ContentPropertyAttribute
Applies to: Class
Arguments: A string that specifies the name of a member of the attributed type.
ContentPropertyAttribute indicates that the property as named by the argument should serve as the XAML content property for that type. The XAML content property definition inherits to all derived types that are assignable to the defining type. You can override the definition on a specific derived type by applying ContentPropertyAttribute on the specific derived type.
For the property that serves as the XAML content property, property element tagging for the property can be omitted in the XAML usage. Typically, you designate XAML content properties that promote a streamlined XAML markup for your content and containment models. Because only one member can be designated as the XAML content property, you sometimes have design choices to make regarding which of several container properties of a type should be designated as the XAML content property. The other container properties must be used with explicit property elements.
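As an illustration (a sketch with hypothetical type names Panel and Item, not taken from this document): if a type's Children collection property is designated as the XAML content property, the explicit property element can be omitted.

```
<!-- Explicit property element usage -->
<Panel>
  <Panel.Children>
    <Item />
  </Panel.Children>
</Panel>

<!-- Equivalent usage when Children is the XAML content property -->
<Panel>
  <Item />
</Panel>
```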
In the XAML node stream, XAML content properties still produce StartMember and EndMember nodes, using the name of the property for the XamlMember. To determine whether a member is the XAML content property, examine the XamlType value from the StartObject position and obtain the value of ContentProperty.
Reference Documentation: ContentWrapperAttribute
Applies to: Class, specifically collection types.
Arguments: A Type that specifies the type to use as the content wrapper type for foreign content.
ContentWrapperAttribute specifies one or more types on the associated collection type that will be used to wrap foreign content. Foreign content refers to cases where the type system constraints on the type of the content property do not capture all of the possible content cases that XAML usage for the owning type would support. For example, XAML support for content of a particular type might support strings in a strongly typed generic Collection<T>. Content wrappers are useful for migrating preexisting markup conventions into XAML's conception of assignable values for collections, such as migrating text-related content models.
To specify more than one content wrapper type, apply the attribute multiple times.
Reference Documentation: DependsOnAttribute
Applies to: Property
Arguments: A string that specifies the name of another member of the attributed type.
DependsOnAttribute indicates that the attributed property depends on the value of another property. Applying this attribute to a property definition ensures that the dependent properties are processed first in XAML object writing. Usages of DependsOnAttribute specify the exceptional cases of properties on types where a specific order of parsing must be followed for valid object creation.
You can apply multiple DependsOnAttribute cases to a property definition.
Reference Documentation: MarkupExtensionReturnTypeAttribute
Applies to: Class, which is expected to be a MarkupExtension derived type.
Arguments: A Type that specifies the most precise type to expect as the ProvideValue result of the attributed MarkupExtension.
For more information, see Markup Extensions for XAML Overview.
Reference Documentation: NameScopePropertyAttribute
Applies to: Class
Arguments: Supports two forms of attribution:
A string that specifies the name of a property on the attributed type.
A string that specifies the name of a property, and a Type for the type that defines the named property. This form is for specifying an attachable member as the XAML namescope property.
NameScopePropertyAttribute specifies a property that provides the XAML namescope value for the attributed class. The XAML namescope property is expected to reference an object that implements INameScope and holds the actual XAML namescope, its store, and its behavior.
Reference Documentation: RuntimeNamePropertyAttribute
Applies to: Class
Arguments: A string that specifies the name of the run-time name property on the attributed type.
RuntimeNamePropertyAttribute reports a property of the attributed type that maps to the XAML x:Name Directive. The property must be of type String and must be read/write.
The definition inherits to all derived types that are assignable to the defining type. You can override the definition on a specific derived type by applying RuntimeNamePropertyAttribute on the specific derived type.
Reference Documentation: TrimSurroundingWhitespaceAttribute
Applies to: Types
Arguments: None.
TrimSurroundingWhitespaceAttribute is applied to specific types that might appear as child elements within whitespace significant content (content held by a collection that has WhitespaceSignificantCollectionAttribute). TrimSurroundingWhitespaceAttribute is mainly relevant to the save path, but is available in the XAML type system in the load path by examining XamlType.TrimSurroundingWhitespace. For more information, see Whitespace Processing in XAML.
Reference Documentation: TypeConverterAttribute
Applies to: Class, property, method (the only XAML-valid method case is a get accessor that supports an attachable member).
Arguments: The Type of the TypeConverter.
TypeConverterAttribute in a XAML context references a custom TypeConverter. This TypeConverter provides type conversion behavior for custom types, or members of that type.
You apply the TypeConverterAttribute attribute to your type, referencing your type converter implementation. You can define type converters for XAML on classes, structures, or interfaces. You do not need to provide type conversion for enumerations; that conversion is enabled natively.
Your type converter should be able to convert from a string that is used for attributes or initialization text in markup, into your intended destination type. For more information, see TypeConverters and XAML.
Rather than applying to all values of a type, a type converter behavior for XAML can also be established on a specific property. In this case, you apply TypeConverterAttribute to the property definition (the outer definition, not the specific get and set definitions).
A type converter behavior for XAML usage of a custom attachable member can be assigned by applying TypeConverterAttribute to the get method accessor that supports the XAML usage.
Similar to TypeConverter, TypeConverterAttribute existed in the .NET Framework prior to the existence of XAML, and the type converter model served other purposes. In order to reference and use TypeConverterAttribute, you must fully qualify it or provide a using statement for System.ComponentModel. You must also include the System assembly in your project.
Reference Documentation: UidPropertyAttribute
Applies to: Class
Arguments: A string that references the relevant property by name.
Indicates the CLR property of a class that aliases the x:Uid Directive.
Reference Documentation: UsableDuringInitializationAttribute
Applies to: Class
Arguments: A Boolean. If used for the attribute's intended purpose, this should always be specified as true.
Indicates whether this type is built top-down during XAML object graph creation. This is an advanced concept, which is probably closely related to the definition of your programming model. For more information, see UsableDuringInitializationAttribute.
Reference Documentation: ValueSerializerAttribute
Applies to: Class, property, method (the only XAML-valid method case is a get accessor that supports an attachable member).
Arguments: A Type that specifies the value serializer support class to use when serializing all properties of the attributed type, or the specific attributed property.
ValueSerializer specifies a value serialization class that requires more state and context than a TypeConverter does. ValueSerializer can be associated with an attachable member by applying the ValueSerializerAttribute attribute on the static get accessor method for the attachable member. Value serialization is also applicable for enumerations, interfaces and structures, but not for delegates.
Reference Documentation: WhitespaceSignificantCollectionAttribute
Applies to: Class, specifically collection types that are expected to host mixed content, where white space around object elements might be significant for UI representation.
Arguments: None.
WhitespaceSignificantCollectionAttribute indicates that a collection type should be processed as whitespace significant by a XAML processor, which influences the construction of the XAML node stream's value nodes within the collection. For more information, see Whitespace Processing in XAML.
Reference Documentation: XamlDeferLoadAttribute
Applies to: Class, property.
Arguments: Supports two attribution forms: types as strings, or types as Type. See XamlDeferLoadAttribute.
Indicates that a class or property has a deferred load usage for XAML (such as a template behavior), and reports the class that enables the deferring behavior and its destination/content type.
Reference Documentation: XamlSetMarkupExtensionAttribute
Applies to: Class
Arguments: Names the callback.
Indicates that a class can use a markup extension to provide a value for one or more of its properties, and references a handler that a XAML writer should call before performing a markup extension set operation on any property of the class.
Reference Documentation: XamlSetTypeConverterAttribute
Applies to: Class
Arguments: Names the callback.
Indicates that a class can use a type converter to provide a value for one or more of its properties, and references a handler that a XAML writer should call before performing a type converter set operation on any property of the class.
Reference Documentation: XmlLangPropertyAttribute
Applies to: Class
Arguments: A string that specifies the name of the property to alias to xml:lang on the attributed type.
XmlLangPropertyAttribute reports a property of the attributed type that maps to the XML lang directive. The property is not necessarily of type String, but must be assignable from a string (this could be accomplished by associating a type converter with the property's type, or with the specific property). The property must be read/write.
The scenario for mapping xml:lang is to give a run-time object model access to XML-specified language information without specifically processing it with an XML DOM.
The definition inherits to all derived types that are assignable to the defining type. You can override the definition on a specific derived type by applying XmlLangPropertyAttribute on the specific derived type, although that is an uncommon scenario.
The following sections describe the XAML-related attributes that are not applied to types or member definitions, but are instead applied to assemblies. These attributes are pertinent to the overall goal of defining a library that contains custom types to use in XAML. Some of the attributes do not necessarily influence the XAML node stream directly, but are passed on in the node stream for other consumers to use. Consumers for the information include design environments or serialization processes that need XAML namespace information and associated prefix information. A XAML schema context (including the .NET Framework XAML Services default) also uses this information.
Reference Documentation: XmlnsCompatibleWithAttribute
Arguments:
A string that specifies the identifier of the XAML namespace to subsume.
A string that specifies the identifier of the XAML namespace that can subsume the XAML namespace from the previous argument.
XmlnsCompatibleWithAttribute specifies that a XAML namespace can be subsumed by another XAML namespace. Typically, the subsuming XAML namespace is indicated in a previously defined XmlnsDefinitionAttribute. This technique can be used for versioning a XAML vocabulary in a library and to make it compatible with previously defined markup against the earlier versioned vocabulary.
Reference Documentation: XmlnsDefinitionAttribute
Arguments:
A string that specifies the identifier of the XAML namespace to define.
A string that names a CLR namespace. The CLR namespace should define public types in your assembly, and at least one of the CLR namespace types should be intended for XAML usage.
XmlnsDefinitionAttribute specifies a mapping on a per-assembly basis between a XAML namespace and a CLR namespace, which is then used for type resolution by a XAML object writer or XAML schema context.
More than one XmlnsDefinitionAttribute can be applied to an assembly. This might be done for any combination of the following reasons:
The library design contains multiple CLR namespaces for logical organization of run-time API access; however, you want all types in those namespaces to be XAML-usable by referencing the same XAML namespace. In this case, you apply several XmlnsDefinitionAttribute attributes using the same XmlNamespace value but different ClrNamespace values. This is especially useful if you are defining mappings for the XAML namespace that your framework or application intends to be the default XAML namespace in common usage.
The library design contains multiple CLR namespaces, and you want a deliberate XAML namespace separation between the usages of types in those CLR namespaces.
You define a CLR namespace in the assembly, and you want it to be accessible through more than one XAML namespace. This scenario occurs when you are supporting multiple vocabularies with the same codebase.
You define XAML language support in one or more CLR namespaces. For these, the XmlNamespace value should be.
Reference Documentation: XmlnsPrefixAttribute
Arguments:
A string that specifies the identifier of a XAML namespace.
A string that specifies a recommended prefix.
XmlnsPrefixAttribute specifies a recommended prefix to use for a XAML namespace. The prefix is useful when writing elements and attributes in a XAML file that is serialized by the .NET Framework XAML Services XamlXmlWriter, or when a XAML-implementing library interacts with a design environment that has XAML editing features.
More than one XmlnsPrefixAttribute can be applied to an assembly. This might be done for any combination of the following reasons:
Your assembly defines types for more than one XAML namespace. In this case you should define different prefix values for each XAML namespace.
You are supporting multiple vocabularies, and you use different prefixes for each vocabulary and XAML namespace.
You define XAML language support in the assembly and have a XmlnsDefinitionAttribute for. In this case, you typically should promote the prefix x.
https://msdn.microsoft.com/en-us/library/vstudio/ff354959
Here's a sample URLconf:

from django.conf.urls import patterns, url
from . import views

urlpatterns = patterns('',
    url(r'^articles/2003/$', views.special_case_2003),
    url(r'^articles/(\d{4})/$', views.year_archive),
    url(r'^articles/(\d{4})/(\d{2})/$', views.month_archive),
    url(r'^articles/(\d{4})/(\d{2})/(\d+)/$', views.article_detail),
)
- /articles/2003 would not match any of these patterns, because each pattern requires that the URL end with a slash.

Named groups¶

Here's the above URLconf, rewritten to use named groups:

from django.conf.urls import patterns, url
from . import views

urlpatterns = patterns('',
    url(r'^articles/2003/$', views.special_case_2003),
    url(r'^articles/(?P<year>\d{4})/$', views.year_archive),
    url(r'^articles/(?P<year>\d{4})/(?P<month>\d{2})/$', views.month_archive),
    url(r'^articles/(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d+)/$', views.article_detail),
)

This accomplishes exactly the same thing as the previous example, with one subtle difference: the captured values are passed to view functions as keyword arguments rather than positional arguments.
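Non-named versus named regex groups map to positional versus keyword view arguments; the distinction can be seen with Python's re module alone, outside Django:

```python
import re

url = 'articles/2005/03/'

# Non-named groups are collected in order, like positional arguments.
positional = re.match(r'^articles/(\d{4})/(\d{2})/$', url)
print(positional.groups())   # -> ('2005', '03')

# Named groups are collected by name, like keyword arguments.
named = re.match(r'^articles/(?P<year>\d{4})/(?P<month>\d{2})/$', url)
print(named.groupdict())     # -> {'year': '2005', 'month': '03'}
```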
Note that each captured argument is sent to the view as a plain Python string: the year argument to views.year_archive() will be a string, not an integer, even though \d{4} will only match integer strings.
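The string-capture behavior can be checked with re directly:

```python
import re

match = re.match(r'^articles/(?P<year>\d{4})/$', 'articles/2003/')
year = match.group('year')

# The captured value is a str even though \d{4} matches only digits;
# a view must convert it explicitly when it needs an int.
print(type(year).__name__)  # -> str
print(int(year) + 1)        # -> 2004
```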
Specifying defaults for view arguments¶
A convenient trick is to specify default parameters for your views’ arguments. Here’s an example URLconf and view:
# URLconf
from django.conf.urls import patterns, url
from . import views

urlpatterns = patterns('',
    url(r'^blog/$', views.page),
    url(r'^blog/page(?P<num>\d+)/$', views.page),
)

# View (in blog/views.py)
def page(request, num="1"):
    # Output the appropriate page of blog entries, according to num.
    ...

In the above example, both URL patterns point to the same view, views.page, but the first pattern doesn't capture anything from the URL. If the first pattern matches, the page() function will use its default argument for num, "1". If the second pattern matches, page() will use whatever num value was captured by the regex.
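The default-argument behavior can be exercised with plain Python, independent of Django (the request parameter is unused in this sketch and only mirrors a view's signature):

```python
# A plain-Python sketch of the default-argument trick; request is a
# placeholder that a real Django view would receive.
def page(request, num="1"):
    return "blog page %s" % num

# /blog/ matches the pattern that captures nothing: the default applies.
print(page(None))        # -> blog page 1
# /blog/page3/ captures num="3" from the URL.
print(page(None, "3"))   # -> blog page 3
```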
Syntax of the urlpatterns variable¶
urlpatterns should be a Python list, in the format returned by the function django.conf.urls.patterns(). Always use patterns() to create the urlpatterns variable.

Error handling¶

You can override the default error views by assigning values to the following variables in your root URLconf:

- handler404 – See django.conf.urls.handler404.
- handler500 – See django.conf.urls.handler500.
- handler403 – See django.conf.urls.handler403.
- handler400 – See django.conf.urls.handler400.
Passing strings instead of callable objects¶
It is possible to pass a string containing the path to a view rather than the actual Python function object. This alternative is supported for the time being, though it is not recommended and will be removed in a future version of Django.
For example, given this URLconf using Python function objects:
from django.conf.urls import patterns, url
from mysite.views import archive, about, contact

urlpatterns = patterns('',
    url(r'^archive/$', archive),
    url(r'^about/$', about),
    url(r'^contact/$', contact),
)
You can accomplish the same thing by passing strings rather than objects:
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^archive/$', 'mysite.views.archive'),
    url(r'^about/$', 'mysite.views.about'),
    url(r'^contact/$', 'mysite.views.contact'),
)
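Resolving a dotted string to a callable can be sketched with importlib (Django's actual resolution logic is more involved; this is only an illustration, shown here against a stdlib function):

```python
import importlib

def resolve(dotted):
    # Split "package.module.attr" into a module path and an attribute
    # name, import the module, and return the named object.
    module_path, name = dotted.rsplit('.', 1)
    return getattr(importlib.import_module(module_path), name)

# Any importable dotted path works; a stdlib example:
view = resolve('string.capwords')
print(view('hello world'))  # -> Hello World
```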
The following example is functionally identical. It’s just a bit more compact because it imports the module that contains the views, rather than importing each view individually:
from django.conf.urls import patterns, url
from mysite import views

urlpatterns = patterns('',
    url(r'^archive/$', views.archive),
    url(r'^about/$', views.about),
    url(r'^contact/$', views.contact),
)
Note that class based views must be imported:
from django.conf.urls import patterns, url
from mysite.views import ClassBasedView

urlpatterns = patterns('',
    url(r'^myview/$', ClassBasedView.as_view()),
)
The view prefix¶
If you do use strings, it is possible to specify a common prefix in your patterns() call.
Here’s an example URLconf based on the Django overview:
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^articles/(\d{4})/$', 'news.views.year_archive'),
    url(r'^articles/(\d{4})/(\d{2})/$', 'news.views.month_archive'),
    url(r'^articles/(\d{4})/(\d{2})/(\d+)/$', 'news.views.article_detail'),
)

In this example, each view string has a common prefix: 'news.views'. Instead of typing that out for each entry, you can use the first argument to the patterns() function to specify a prefix to apply to each view function. With this in mind, the above example can be rewritten as:
from django.conf.urls import patterns, url

urlpatterns = patterns('news.views',
    url(r'^articles/(\d{4})/$', 'year_archive'),
    url(r'^articles/(\d{4})/(\d{2})/$', 'month_archive'),
    url(r'^articles/(\d{4})/(\d{2})/(\d+)/$', 'article_detail'),
)
Note that you don’t put a trailing dot (".") in the prefix. Django puts that in automatically.
Multiple view prefixes¶
In practice, you’ll probably end up mixing and matching views to the point where the views in your urlpatterns won’t have a common prefix. However, you can still take advantage of the view prefix shortcut to remove duplication. Just add multiple patterns() objects together, like this:
Old:
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^$', 'myapp.views.app_index'),
    url(r'^(?P<year>\d{4})/(?P<month>[a-z]{3})/$', 'myapp.views.month_display'),
    url(r'^tag/(?P<tag>\w+)/$', 'weblog.views.tag'),
)
New:
from django.conf.urls import patterns, url

urlpatterns = patterns('myapp.views',
    url(r'^$', 'app_index'),
    url(r'^(?P<year>\d{4})/(?P<month>[a-z]{3})/$', 'month_display'),
)

urlpatterns += patterns('weblog.views',
    url(r'^tag/(?P<tag>\w+)/$', 'tag'),
)

Including other URLconfs¶

At any point, your urlpatterns can "include" other URLconf modules. This essentially "roots" a set of URLs below other ones. For example:

from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    # ... snip ...
    url(r'^comments/', include('django.contrib.comments.urls')),
    # ... snip ...
)

Another possibility is to include additional URL patterns not by specifying the URLconf Python module defining them as the include() argument but by using directly the pattern list as returned by patterns() instead. For example, consider this URLconf:
from django.conf.urls import include, patterns, url
from apps.main import views as main_views
from credit import views as credit_views

extra_patterns = patterns('',
    url(r'^reports/(?P<id>\d+)/$', credit_views.report),
    url(r'^charge/$', credit_views.charge),
)

urlpatterns = patterns('',
    url(r'^$', main_views.homepage),
    url(r'^help/', include('apps.help.urls')),
    url(r'^credit/', include(extra_patterns)),
)

Using include() can also remove redundancy from URLconfs where a single pattern prefix is used repeatedly. For example, consider this URLconf:

from django.conf.urls import patterns, url

urlpatterns = patterns('wiki.views',
    url(r'^(?P<page_slug>\w+)-(?P<page_id>\w+)/history/$', 'history'),
    url(r'^(?P<page_slug>\w+)-(?P<page_id>\w+)/edit/$', 'edit'),
    url(r'^(?P<page_slug>\w+)-(?P<page_id>\w+)/discuss/$', 'discuss'),
    url(r'^(?P<page_slug>\w+)-(?P<page_id>\w+)/permissions/$', 'permissions'),
)
We can improve this by stating the common path prefix only once and grouping the suffixes that differ:
from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^(?P<page_slug>\w+)-(?P<page_id>\w+)/', include(patterns('wiki.views',
        url(r'^history/$', 'history'),
        url(r'^edit/$', 'edit'),
        url(r'^discuss/$', 'discuss'),
        url(r'^permissions/$', 'permissions'),
    ))),
)
Captured parameters¶
An included URLconf receives any captured parameters from parent URLconfs, so the following example is valid:
# In settings/urls/main.py
from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^(?P<username>\w+)/blog/', include('foo.urls.blog')),
)

# In foo/urls/blog.py
from django.conf.urls import patterns, url

urlpatterns = patterns('foo.views',
    url(r'^$', 'blog.index'),
    url(r'^archive/$', 'blog.archive'),
)
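Mechanically, the resolver matches the outer pattern against the start of the path, collects its named groups, and then matches only the remaining path against the inner patterns, merging the captured keyword arguments. A simplified sketch of that two-level matching (hypothetical names; not Django's real resolver):

```python
import re

# Stand-in for the included URLconf (foo/urls/blog.py above).
inner_patterns = [
    (r'^$', 'blog.index'),
    (r'^archive/$', 'blog.archive'),
]

def resolve(path, outer_regex=r'^(?P<username>\w+)/blog/'):
    """Match the outer prefix, then resolve the remainder against inner patterns."""
    outer = re.match(outer_regex, path)
    if not outer:
        return None
    remainder = path[outer.end():]            # strip the matched prefix
    for pattern, view_name in inner_patterns:
        inner = re.match(pattern, remainder)
        if inner:
            kwargs = dict(outer.groupdict())  # captured in the parent URLconf
            kwargs.update(inner.groupdict())  # plus inner captures, if any
            return view_name, kwargs
    return None

print(resolve('alice/blog/archive/'))  # ('blog.archive', {'username': 'alice'})
```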
In the above example, the captured "username" variable is passed to the included URLconf, as expected.

Passing extra options to view functions¶

URLconfs have a hook that lets you pass extra arguments to your view functions, as a Python dictionary: the url() function can take an optional third argument, which should be a dictionary of extra keyword arguments to pass to the view function. Similarly, you can pass extra options to include(), in which case each line in the included URLconf will be passed the extra options. For example, these two URLconf sets are functionally identical.

Set one:

# main.py
from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^blog/', include('inner'), {'blogid': 3}),
)

# inner.py
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^archive/$', 'mysite.views.archive'),
    url(r'^about/$', 'mysite.views.about'),
)
Set two:
# main.py
from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^blog/', include('inner')),
)

# inner.py
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^archive/$', 'mysite.views.archive', {'blogid': 3}),
    url(r'^about/$', 'mysite.views.about', {'blogid': 3}),
)

Naming URL patterns¶
It’s fairly common to use the same view function in multiple URL patterns in your URLconf. For example, these two URL patterns both point to the archive view:
from django.conf.urls import patterns, url
from mysite.views import archive

urlpatterns = patterns('',
    url(r'^archive/(\d{4})/$', archive),
    url(r'^archive-summary/(\d{4})/$', archive, {'summary': True}),
)

This is completely valid, but it leads to problems when you try to do reverse URL matching, because both patterns point to the same view. To solve this, url() accepts an optional name argument:
from django.conf.urls import patterns, url from mysite.views import archive urlpatterns = patterns('', url(r'^archive/(\d{4})/$', archive, name="full-archive"), url(r'^archive-summary/(\d{4})/$', archive, {'summary': True}, name="arch-summary"), )
With these names in place (full-archive and arch-summary), you can target each pattern individually by using its name:
{% url 'arch-summary' 1945 %}
{% url 'full-archive' 2007 %}
Even though both URL patterns refer to the archive view here, using the name parameter to django.conf.urls.url() allows you to tell them apart in templates.
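Reversing a named pattern essentially runs the regex backwards: the pattern is looked up by name and its capture groups are replaced with the supplied arguments. A toy illustration of the idea (hypothetical helper; far simpler than Django's real reverse()):

```python
import re

# Name -> pattern registry, mirroring the named URLconf above.
named_patterns = {
    'full-archive': r'^archive/(\d{4})/$',
    'arch-summary': r'^archive-summary/(\d{4})/$',
}

def toy_reverse(name, *args):
    """Substitute positional args into the pattern's capture groups."""
    pattern = named_patterns[name]
    url, remaining = pattern.strip('^$'), list(args)
    # Replace each (...) group with the next argument, left to right.
    url = re.sub(r'\([^)]*\)', lambda m: str(remaining.pop(0)), url)
    return '/' + url

print(toy_reverse('arch-summary', 1945))  # /archive-summary/1945/
print(toy_reverse('full-archive', 2007))  # /archive/2007/
```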
The string used for the URL name can contain any characters you like. You are not restricted to valid Python names.
URL namespaces¶

URL namespaces allow you to deploy more than one instance of a single application, because the named URLs of each instance can be reversed unambiguously. For example, the polls application from the tutorial could be deployed under two different URLs:

from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^author-polls/', include('polls.urls', namespace='author-polls', app_name='polls')),
    url(r'^publisher-polls/', include('polls.urls', namespace='publisher-polls', app_name='polls')),
)
The included polls.urls might look like this:

from django.conf.urls import patterns, url
from . import views

urlpatterns = patterns('',
    url(r'^$', views.IndexView.as_view(), name='index'),
    url(r'^(?P<pk>\d+)/$', views.DetailView.as_view(), name='detail'),
)

With this setup, if one of the instances is current (say, while rendering a page of 'author-polls'), 'polls:index' will resolve to the index page of that instance; in a template this is written {% url 'polls:index' %}.
Note that reversing in the template requires that current_app be added as an attribute to the template context, like this:
def render_to_response(self, context, **response_kwargs):
    response_kwargs['current_app'] = self.request.resolver_match.namespace
    return super(DetailView, self).render_to_response(context, **response_kwargs)
If there is no current instance - say, if we were rendering a page somewhere else on the site - 'polls:index' will resolve to the last registered instance of polls. Since there is no default instance (an instance with the namespace 'polls'), the last instance of polls that is registered will be used; this would be 'publisher-polls', since it is declared last in the urlpatterns.
URL namespaces of included URLconfs can be specified in two ways.
Firstly, you can provide the application and instance namespaces as arguments to include() when you construct your URL patterns. For example:
url(r'^polls/', include('polls.urls', namespace='author-polls', app_name='polls')),
This will include the URLs defined in polls.urls into the application namespace 'polls', with the instance namespace 'author-polls'.
Secondly, you can include an object that contains embedded namespace data. If you include() a list of url() instances, the URLs contained in that object will be added to the global namespace. However, you can also include() a 3-tuple containing:
(<patterns object>, <application namespace>, <instance namespace>)
For example:
from django.conf.urls import include, patterns, url
from . import views

polls_patterns = patterns('',
    url(r'^$', views.IndexView.as_view(), name='index'),
    url(r'^(?P<pk>\d+)/$', views.DetailView.as_view(), name='detail'),
)

urlpatterns = patterns('',
    url(r'^polls/', include((polls_patterns, 'polls', 'author-polls'))),
)

This will include the nominated URL patterns into the given application and instance namespaces.
Be sure to pass a tuple to include(). If you simply pass three arguments: include(polls_patterns, 'polls', 'author-polls'), Django won’t throw an error but due to the signature of include(), 'polls' will be the instance namespace and 'author-polls' will be the application namespace instead of vice versa.
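The pitfall comes from include()'s signature, which is roughly include(arg, namespace=None, app_name=None). A minimal sketch of why the positional call silently swaps the two namespaces (simplified stand-in, not Django's code):

```python
def include(arg, namespace=None, app_name=None):
    """Simplified stand-in for django.conf.urls.include()."""
    if isinstance(arg, tuple):
        # 3-tuple form: (patterns, application namespace, instance namespace)
        urlconf, app_name, namespace = arg
    else:
        urlconf = arg
    return urlconf, app_name, namespace

polls_patterns = ['<pattern list>']

# Correct: pass the 3-tuple; 'polls' is the application namespace.
print(include((polls_patterns, 'polls', 'author-polls')))

# Wrong: three positional args bind to (arg, namespace, app_name),
# so the application and instance namespaces end up swapped.
print(include(polls_patterns, 'polls', 'author-polls'))
```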
Source: https://docs.djangoproject.com/en/1.7/topics/http/urls/
An ASP.NET Framework for Human Interactive Proofs
Stephen Toub
Microsoft Corporation
September 2004
Summary: Stephen Toub introduces concepts involved in Human Interactive Proofs and creates a framework for their incorporation into your ASP.NET sites. (26 printed pages)
Download the MSDNHip.msi sample file.
Contents
Introduction
Reverse Turing Test
Human Interactive Proofs
Bringing HIP to ASP.NET
HipChallenge
ImageHipChallenge
HipValidator
AudioHipChallenge
Does It Work?
Related Books
Introduction
The Web is a dangerous place. Attackers come at your sites constantly from all sides, wielding powerful weapons in attempts to degrade, defile, compromise, shut down, or simply take advantage of your Web presence. You fight back with security patches, input validation checks, encryption, running with least privilege, reducing possible attack surface, and a myriad of other secure coding techniques such as those outlined in Michael Howard and David LeBlanc's excellent treatise on the subject, Writing Secure Code (Microsoft Press, 2003). But there are always new attacks and new angles, so the defenses must evolve as well.
One of the greatest tools in an attacker's arsenal is computing power. That may sound trite, but it's true. The speed at which a computer can send hundreds and thousands of requests to hundreds and thousands of sites is staggering. Often, this is used for attacks that in moderation might not be considered an attack at all. For example, Hotmail provides free e-mail accounts to the general public, something they're happy to do for any user who wants one (as evidenced by the hundreds of millions of registered Hotmail users). However, they're not happy to provide thousands of accounts to an attacker looking to use them to send unsuspecting users boatloads of spam. The problem for Hotmail and other sites in the same predicament is that it can be very hard, if not impossible, to differentiate a browser controlled by a grandmother looking to open an account to correspond with her grandson, and an attacker with a custom application looking to automatically open multiple accounts to send that grandson mail of a different sort. So what's the solution? How does a Web site differentiate human-initiated requests from requests being scripted by a program?
Reverse Turing Test
In 1936, Alan Turing published his now famous paper On Computable Numbers, with an Application to the Entscheidungsproblem, in which he presented his idea for Turing machines. In the early forties, along with Gordon Welchman, he developed a machine that could break the Enigma codes of the Luftwaffe. Truly an amazing guy. But in 1950, Alan Turing fundamentally changed the world of artificial intelligence by proposing what is now known as the Turing Test.
Turing believed that manmade computers would one day have abilities and intelligence rivaling that of humans. After all, the human brain is a form of a computer, albeit one based in biology rather than in silicon. He believed that asking whether computers can think is meaningless, that the true test of intelligence is whether a human could differentiate an artificial intelligence developed by man from man himself. To further his argument, Turing proposed an "imitation game." Put a computer in one room and a human in another, both of which are separated from a third party, a human interrogator, who doesn't know which room contains which contestant. This examiner poses questions to both rooms through a text-based interface, and when satisfied he guesses which room contains the computer and which room contains the human. If he is unable to come to the correct answer more than half the time, the computer passes the test and must be considered as intelligent as its human counterpart.
The Turing Test is based on the problem of a human differentiating between a computer and a human. To solve our problem, in a sense we need the reverse. A computer needs to differentiate between a computer and a human, a task many researchers would agree is significantly more complex. This scenario has formally been named the Reverse Turing Test (RTT), although the term has also been used in some circumstances to describe a similar scenario but where both contestants attempt to be recognized as a computer.
So how does a computer differentiate a human from a computer?
Human Interactive Proofs
In 2000, Yahoo was searching for an answer to this very problem. Their chief scientist, Udi Manber, recruited the help of Carnegie Mellon professor Manuel Blum, and with the aid of one of Blum's Ph.D. students, Luis von Ahn, they created CAPTCHA. CAPTCHAs, or more specifically "Completely Automated Public Turing Tests to Tell Computers and Humans Apart," are based on the RTT scenario and are a kind of Human Interactive Proof (HIP), presenting to a user a puzzle that should be easily solvable by a human but difficult for a computer. In essence, they take advantage of the intelligence gap that exists between humans and computers. CAPTCHAs differ from the standard Turing Test in that they're "completely automated," meaning that the questions or puzzles posed to the user come from a computer, and thus must be able to be generated automatically. CAPTCHA-based systems are now in use all over the Web, from large Internet portals like Yahoo and MSN to smaller, individually run personal sites.
In their paper, von Ahn and Blum proposed a few different types of CAPTCHA puzzles, including one that renders words in a distorted form (you've probably seen these more often than other kinds), asking the user to enter the presented word, and one that displays one or more pictures of a familiar entity (such as a monkey), also distorted, asking the user to name the contents of the pictures. Examples of these types of puzzles are shown in Figure 1 and Figure 2, respectively. Note that in both cases the distortion is important. In the former case it's necessary to thwart optical character recognition (OCR) software from recognizing the word. In the latter case it's necessary to prevent an attacker from cataloguing images known to be used by the server.
Figure 1. What does this say?
Figure 2. What kind of animal is this?
Microsoft Research has done a fair amount of investigation into these types of systems. For example, Yong Rui and Zicheng Liu have proposed a set of HIP design guidelines that ensure that a HIP puzzle is both secure and usable. They've also designed HIP puzzles based on the recognition of human faces and on the detection of human facial features. An example of such a puzzle is shown in Figure 3. Here, users must be able to locate a certain number of specific facial features within the image, such as four eye corners and two mouth corners. Rui and Liu's paper describes their approach in detail.
Figure 3. Find the facial features
There are many scenarios in which these puzzles can be put to good use. As mentioned, they can be an effective way of limiting automatic creation of user accounts. AltaVista successfully used similar systems to block more than 95 percent of automated attempts to add URLs to their search engine. Benny Pinkas and Tomas Sander of Hewlett Packard wrote a paper detailing how these systems could be effectively used on login pages to prevent online dictionary attacks. CAPTCHAs have been used effectively to prevent comment spam on blogs and to prevent automated voting for online polls. A few companies now provide e-mail spam prevention applications and plug-ins based on this type of technology (for example, the mail server might force the sender of an e-mail to solve a puzzle before allowing the mail to be delivered). With a slight variation on the system developed in this article, puzzles could even be used to ensure that requests to a Web service are initiated by a user and not by an automated attacker.
No HIP implementation exists in ASP.NET out-of-the-box. However, creating such a system requires surprisingly little code.
Bringing HIP to ASP.NET
Good puzzles are central to the success of any CAPTCHA system, and yet the types of puzzles I present in this article (those based on image distortion) are relatively easy to break by someone determined to do so. Suffice it to say, however, that all puzzles will eventually be broken. Recognition research is progressing at such a rate that puzzles currently sufficient for HIP purposes may not be so in the near future, and many in widespread use have already been broken (though even "broken" puzzles can still be very useful on sites that aren't incredibly interesting to attackers). As such, the actual challenge presented to the user, while absolutely important, should initially be secondary to the framework used to deploy and render these puzzles. When the puzzles are later broken, new puzzles can be written and integrated into the existing framework without significant developer efforts related to refactoring the protected sites. Thus, this article focuses on designing a good framework for implementing HIP in ASP.NET.
When breaking down a HIP system into its core elements, one finds three main parts. The first part is the actual challenge, the puzzle presented to the user. Of course, the user must have a way to respond to the challenge, and thus the second part is a method of input whereby a user can answer with his solution. Third, the system must have a way of informing the user of her success or failure in solving the puzzle. Breaking down the system in this fashion makes it easy to craft a good framework for an ASP.NET solution. I've chosen to base mine on the ASP.NET control framework.
The download associated with this article includes two sample puzzle implementations, one based on a visual puzzle and one based on an aural puzzle (note that these puzzles have not been tested against the current state of image and audio recognition systems and exist purely as samples; more on this later). Both of these implementations are actually custom ASP.NET controls, derived from an abstract base class I've created named HipChallenge.
HipChallenge maps to the first of the three parts. All concrete puzzles derive from HipChallenge and can be plugged into an existing system with almost no work from a developer. ASP.NET provides validation controls that make short order of notifying a user when provided input is invalid, so the validation and user notification portion of the system is implemented as a custom ASP.NET validation control, HipValidator. This validator, like all other ASP.NET validators (RegularExpressionValidator, RequiredFieldValidator, and so on) is configured to interact with other controls on the page. In the case of HipValidator, it interacts with two. First, it can be configured to work with any HipChallenge-derived control on the page, validating user input against the puzzle presented by that control. It's also configured to monitor a developer-specified input control, this being the remaining piece of the system. That can be any control into which a user can supply textual input that could be validated against the challenge.
Thus, a page that implements CAPTCHA will have at least three controls: a HipChallenge-derived control that presents the challenge to the user, an input control such as a TextBox that accepts user input, and a HipValidator that coordinates the other two and validates whether the user successfully solved the puzzle.
HipChallenge
HipChallenge is the base class for any control that renders puzzles to the user. A derived class is only responsible for generating the output form of the puzzle, not the challenge content, with everything else handled by HipChallenge or by the related HipValidator control (to be discussed later). Thus, the core functionality of HipChallenge lies in choosing the content (usually a word or phrase) with which to puzzle the user, storing information about that content in a hidden control on the page, and providing a method to check the user's input on postback against the data stored in that hidden control. All of this functionality is implemented in three core methods supplemented by several helpers. The first and the simplest of the three is an override of Control.CreateChildControls. All this method does is create a hidden control on the page into which we can store data later in the page cycle.
protected override void CreateChildControls()
{
    _hiddenData = new HtmlInputHidden();
    _hiddenData.EnableViewState = false;
    Controls.Add(_hiddenData);
    base.CreateChildControls();
}
The next method, and the most important one for the generation of the challenge, is an override of OnPreRender, and it deserves a bit of explanation. It's the responsibility of OnPreRender to choose the challenge text to be displayed to the user, to store information about that word to the _hiddenData control created in CreateChildControls, and then to pass along the responsibility of generating the puzzle to the derived class. But a problem lies in the second step just defined. I need to store information about the selected word to the client so that on subsequent requests I can validate the client's response, but I can't simply store the selected word itself in plaintext. Why? Because that would make it very easy for an automated application to obtain the word in question, simply by parsing the supplied HTML and searching for the word. Of course, to avoid sending the word to the user as part of the HTML, I could use server-side resources to maintain information for each client, but that could become expensive.
To solve this predicament, one approach is to encrypt the selected word. The encrypted information could then be stored into the hidden field created by CreateChildControls rather than storing the plaintext. As the .NET Framework provides the System.Security.Cryptography namespace complete with a wide-range of supported encryption protocols, adding this layer of protection is straightforward. However, whenever attempting to use encryption to protect secrets, one really needs to think about and examine the types of attacks that could be mounted against the system. As a prime example, I need to ask myself: does encryption really help here? Yes and no. Encrypting the selected word does make it extremely difficult, if not impossible, to determine the chosen word. But does that really prevent all attacks? Of course not. One possible attack against a CAPTCHA solution is to make a bunch of requests, creating a database of the puzzles presented, and allowing one or more people to then later iterate through and solve all of the puzzles in the database. With the puzzles solved, the automated application can resume its attack. It doesn't matter that the text is encrypted; the puzzle itself can still be solved by a human. To get around this attack, I need to ensure that the CAPTCHA is solved within a reasonable amount of time. So, rather than just encrypting the challenge text, an expiration time can be included in the encrypted content. When a user presents his solution to the server, not only is the data decrypted but this expiration time is checked against the current time. An expired solution is no better than an incorrect answer.
This might sound familiar to those of you who have used ASP.NET Forms Authentication. With forms authentication, information about an authenticated user is sent back and forth between the client and the server, usually in cookies. This information is encrypted and can include an expiration date that forces a user to re-authenticate after a predetermined length of time. Fortunately for me, this functionality is exposed through the FormsAuthentication class, specifically its static Encrypt and Decrypt methods, and I can take advantage of this functionality rather than reinventing the wheel. Plausible Encrypt and Decrypt methods are shown below.
internal static string Encrypt(string content, DateTime expiration)
{
    FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
        1, HttpContext.Current.Request.UserHostAddress,
        DateTime.Now, expiration, false, content);
    return FormsAuthentication.Encrypt(ticket);
}

internal static string Decrypt(string encryptedContent)
{
    try
    {
        FormsAuthenticationTicket ticket =
            FormsAuthentication.Decrypt(encryptedContent);
        if (!ticket.Expired) return ticket.UserData;
    }
    catch (ArgumentException) { }
    return null;
}
The Encrypt method creates a new FormsAuthenticationTicket containing the content to be encrypted along with expiration date and time. It then encrypts this ticket using FormsAuthentication.Encrypt and returns the resulting string, which can be embedded directly into the hidden field sent to the client. The Decrypt method decrypts the encrypted content (most likely obtained from either the hidden field on postback or from a query string generated from a derived class). If the ticket decrypts successfully and has not yet expired, the plaintext content is returned.
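The same idea, a tamper-proof token that carries its own expiry, can be sketched outside .NET with an HMAC-signed payload. This is a simplified analogue of FormsAuthentication.Encrypt (note that it only signs; the word remains readable in the payload, whereas the real API also encrypts it):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b'server-side secret key'  # configuration value, never sent to clients

def encrypt(content, lifetime_seconds):
    """Pack content plus an expiry time and sign it so clients can't tamper."""
    payload = json.dumps({'content': content,
                          'expires': time.time() + lifetime_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig + payload).decode()

def decrypt(token):
    """Return the content if the signature checks out and it hasn't expired."""
    raw = base64.urlsafe_b64decode(token.encode())
    sig, payload = raw[:32], raw[32:]
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        return None                      # tampered
    ticket = json.loads(payload)
    if ticket['expires'] < time.time():
        return None                      # expired: no better than a wrong answer
    return ticket['content']

print(decrypt(encrypt('zebra', 120)))    # zebra
```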
That's all fine and dandy. Encryption allows me to send the puzzle to the client, eliminating the burden of maintaining server-side resources. And it also allows me to limit how long a puzzle is valid. But there's a much more damaging attack that this doesn't prevent. What if the attacker solves the puzzle manually and then feeds the solution to his application? His automated attack can then use the solution over and over until the puzzle expires, which might not be for seconds or minutes depending on how the control is configured, and such an amount of time could permit a plethora of requests. In fact, to get around this problem, server-side resources are required. It's the only known way to prevent multiple submissions of the same solution. Have you ever bought something on the Web where the checkout page warned you not to click the submit button twice so that you didn't place your order twice? That's usually the result of the site not tracking which orders have already been submitted. A solution used by some of the more robust Web stores is to send to the client a unique identifier associated with the current shopping cart. When the shopping cart is submitted, that identifier is stored server-side indicating that it has already been submitted, and any future submissions including that ID are ignored. We can use that same solution to prevent an attacker from using a given puzzle and solution more than once.
The simplest way given the steps I've already discussed would be to store the encrypted text server-side in addition to sending it to the client. When authentication is performed, that value can be marked as used on the server, and any future authentication requests using that same encrypted text would fail. Of course, if we're going to store data server-side, there's little reason to also send the encrypted text to the client, given that the encrypted data is relatively large and there's no reason to provide an attacker with more information than she needs, even if that information is encrypted (for example, could she determine from the length of the encrypted data how long the puzzle word is?). So, if preventing this attack is important to you (which it probably should be given the scenarios in which HIP is usually used), an easy solution is as follows, and is the one implemented in the associated sample code.
In OnPreRender, I select the challenge text and generate a unique identifier. I then use that identifier to store the selected text server-side. (The storage mechanism doesn't matter for the purposes of this explanation; I'm using the ASP.NET Cache. However, if you'll be deploying HIP in a Web farm environment, you'll most likely need some form of shared state between the servers, such as a SQL Server database.) Instead of the encrypted text, the _hiddenData control now stores only this ID; an attacker gains no information about the selected text from a randomly generated ID. The text and the ID are then passed to the derived class so that the puzzle can be generated.
protected sealed override void OnPreRender(EventArgs e)
{
    string content = ChooseWord();
    Guid id = Guid.NewGuid();
    SetChallengeText(id, content, DateTime.Now.AddSeconds(Expiration));
    _hiddenData.Value = id.ToString("N");
    RenderChallenge(id, content);
    base.OnPreRender(e);
}
SetChallengeText simply stores the content to the ASP.NET Cache using the ID as a key (notice that the expiration concept is still employed here by removing this puzzle text from the cache after the specified expiration delay). Its counterpart GetChallengeText takes an ID and returns the associated challenge text if any can be found.
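The cache-backed store reduces to a keyed map with expiry, where a validation lookup consumes the entry so a solved puzzle can't be replayed. A minimal in-process sketch (hypothetical names; the article uses the ASP.NET Cache, and this sketch consumes the entry on any attempt, a slightly stricter policy than the article's remove-on-success):

```python
import time
import uuid

_challenges = {}  # id -> (text, expires_at); stands in for the ASP.NET Cache

def set_challenge_text(text, lifetime_seconds=120):
    """Store the chosen word under a fresh random ID and return the ID."""
    challenge_id = uuid.uuid4().hex
    _challenges[challenge_id] = (text, time.time() + lifetime_seconds)
    return challenge_id

def authenticate(challenge_id, user_answer):
    """One-shot check: the entry is consumed whether or not the answer is right."""
    entry = _challenges.pop(challenge_id, None)
    if entry is None:
        return False
    text, expires_at = entry
    return time.time() < expires_at and user_answer.lower() == text.lower()

cid = set_challenge_text('zebra')
print(authenticate(cid, 'ZEBRA'))  # True: case-insensitive match
print(authenticate(cid, 'ZEBRA'))  # False: the same solution can't be replayed
```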
The last important method on HipChallenge is Authenticate.
internal bool Authenticate(string userData)
{
    if (_authenticated) return _authenticated;
    if (userData != null && userData.Length > 0 &&
        _hiddenData.Value != null && _hiddenData.Value.Length > 0)
    {
        try
        {
            Guid id = new Guid(_hiddenData.Value);
            string text = GetChallengeText(id);
            if (text != null && string.Compare(userData, text, true) == 0)
            {
                _authenticated = true;
                SetChallengeText(id, null, DateTime.MinValue);
                return true;
            }
        }
        catch (FormatException) { }
    }
    return false;
}
This method is called during validation of user input to determine whether the user-entered word matches that used to generate the puzzle. It obtains the challenge ID from the hidden field and passes it to GetChallengeText in order to get the text for the user's puzzle. If the text is found and if it matches the user-supplied solution, authentication succeeds. In order to prevent the same solution from being used for the same ID multiple times, a successful authentication also results in removing the ID and its associated text from the cache. Of course, by doing so this also prevents Authenticate from operating correctly twice in the same request. In order to fix that, the result of Authenticate is cached in the _authenticated private member variable (which is initially false). After the correct userData has been supplied to Authenticate, any additional calls to Authenticate in the same HTTP request will return true. Since _authenticated is not static, future HTTP requests (resulting in a new instance of HipChallenge) will still have to authenticate.
HipChallenge also exposes a few additional methods and properties. The Expiration property allows a developer to configure the number of seconds until a puzzle expires (the default is 120, or two minutes). The Words property exposes a StringCollection that should be populated with the domain of words from which puzzles can be rendered. Alternatively, a derived control can override the ChooseWord method in order to further customize how the base class selects the next word for a puzzle. HipChallenge also implements a few protected random number generation methods for retrieving random integers and doubles. All of these methods are wrappers around my RandomNumbers class, which in turn wraps System.Security.Cryptography.RNGCryptoServiceProvider, providing Next and NextDouble methods similar to those exposed by System.Random. As you can probably tell from the namespace, RNGCryptoServiceProvider is a cryptographically strong pseudo-random number generator, whereas Random is not.
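The distinction matters because a predictable generator could let an attacker guess upcoming challenge words or IDs. Python has the same split between random (fast, predictable) and secrets (CSPRNG-backed), a rough analogue of Random versus RNGCryptoServiceProvider; a sketch with hypothetical helper names:

```python
import secrets

WORDS = ['zebra', 'monkey', 'giraffe', 'walrus']  # sample word domain

def choose_word(words=WORDS):
    """Pick the next challenge word with a cryptographically strong RNG."""
    return words[secrets.randbelow(len(words))]

def next_double():
    """Uniform float in [0, 1), comparable to the article's NextDouble wrapper."""
    return secrets.randbits(53) / (1 << 53)

print(choose_word() in WORDS)           # True
print(0.0 <= next_double() < 1.0)       # True
```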
ImageHipChallenge
The ImageHipChallenge control presents a visual puzzle to the user. In its current implementation, this is simply distorted text over a gradient background. The control derives from HipChallenge and is declared as follows:
[ToolboxBitmap(typeof(ImageHipChallenge), "msdn.bmp")]
[ToolboxData("<{0}:ImageHipChallenge Runat=\"server\" " +
    "Height=\"100px\" Width=\"300px\" />")]
public class ImageHipChallenge : HipChallenge
{
    ...
}
The ToolboxBitmapAttribute informs Visual Studio .NET what image I'd like to use in the toolbox for the control ("msdn.bmp", which is compiled into the assembly as an embedded resource), and the ToolboxDataAttribute tells the designer what markup to generate for the control when it is initially added to a page.
As mentioned, when rendered on a page, ImageHipChallenge needs to generate an image link to ImageHipChallenge.aspx (or to whatever endpoint URL has been configured using the control's RenderUrl property). Two methods are involved in this step. First, the control overrides Control.CreateChildControls in order to add an Image control. This is what will end up rendering the img tag when the control's Render method is called. Second, it overrides HipChallenge.RenderChallenge in order to properly configure the ImageUrl property of the Image control created in CreateChildControls.
protected sealed override void CreateChildControls()
{
    base.CreateChildControls();

    // Make sure that the size of this control has been properly defined.
    if (this.Width.IsEmpty || this.Width.Type != UnitType.Pixel ||
        this.Height.IsEmpty || this.Height.Type != UnitType.Pixel)
    {
        throw new InvalidOperationException(
            "Must specify size of control in pixels.");
    }

    // Create and configure the dynamic image.
    _image = new System.Web.UI.WebControls.Image();
    _image.BorderColor = this.BorderColor;
    _image.BorderStyle = this.BorderStyle;
    _image.BorderWidth = this.BorderWidth;
    _image.ToolTip = this.ToolTip;
    _image.EnableViewState = false;
    Controls.Add(_image);
}

protected sealed override void RenderChallenge(Guid id, string content)
{
    // Generate the link to the image generation handler.
    _image.Width = this.Width;
    _image.Height = this.Height;
    _image.ImageUrl = _renderUrl + "?" +
        WIDTH_KEY + "=" + (int)Width.Value + "&" +
        HEIGHT_KEY + "=" + (int)Height.Value + "&" +
        ID_KEY + "=" + id.ToString("N");
}
The base HipChallenge control passes to the RenderChallenge method both the plaintext word content as well as the ID of the challenge. ImageHipChallenge only uses the latter because of its delayed image generation mechanism, but another implementation might use the former or even both. The width and height are appended to the RenderUrl URL along with the challenge ID. This URL is then set as the URL for the Image control, and that's all that is required for this request.
When the browser receives the page's rendering, it'll find an img tag with a src attribute that points back to ImageHipChallenge.aspx. As such, my solution needs to handle requests for this endpoint. To do so, I've created ImageHipChallengeHandler, an IHttpHandler that can generate CAPTCHA images based on the width, height, and challenge ID parameters provided on the query string. To configure this handler, all that is required on the part of the developer is to tell ASP.NET that any requests for the specified endpoint should be handled by an instance of ImageHipChallengeHandler, which she can do by modifying the Web.config to include the following:
<httpHandlers>
    <add verb="*" path="ImageHipChallenge.aspx"
        type="Msdn.Web.UI.WebControls.ImageHipChallengeHandler, Hip"/>
</httpHandlers>
With that in place, any requests for ImageHipChallenge.aspx will be routed to an instance of ImageHipChallengeHandler and handled by its ProcessRequest method, shown here:
public void ProcessRequest(HttpContext context) {
    // Retrieve query parameters and the challenge text
    NameValueCollection queryString = context.Request.QueryString;

    int width = Convert.ToInt32(queryString[ImageHipChallenge.WIDTH_KEY]);
    if (width <= 0 || width > MAX_IMAGE_HEIGHT)
        throw new ArgumentOutOfRangeException(ImageHipChallenge.WIDTH_KEY);

    int height = Convert.ToInt32(queryString[ImageHipChallenge.HEIGHT_KEY]);
    if (height <= 0 || height > MAX_IMAGE_HEIGHT)
        throw new ArgumentOutOfRangeException(ImageHipChallenge.HEIGHT_KEY);

    string text = HipChallenge.GetChallengeText(
        new Guid(queryString[ImageHipChallenge.ID_KEY]));

    if (text != null) {
        // We successfully retrieved the information, so generate
        // the image and send it to the client.
        HttpResponse resp = context.Response;
        resp.Clear();
        resp.ContentType = "img/jpeg";
        using (Bitmap bmp = GenerateImage(text, new Size(width, height))) {
            bmp.Save(resp.OutputStream, ImageFormat.Jpeg);
        }
    }
}
Upon receiving the request, the method obtains the width, height, and challenge ID from the query string. It then uses the ID to retrieve the challenge text, which will succeed only if the ID is valid and if the content hasn't expired from the cache. Assuming text is retrieved, the current HttpResponse is cleared and its ContentType set to "img/jpeg", informing the browser that the content being sent is a JPEG image. A new image is then dynamically generated and saved to the HttpResponse's OutputStream, sending the image to the client. Note that the ContentType and the ImageFormat used in the Image.Save method aren't important as long as they both refer to the same file format. Thus, instead of "img/jpeg" and ImageFormat.Jpeg, I could have used "img/gif" and ImageFormat.Gif.
RNGCryptoServiceProvider is used as a source of randomness when generating the images. First, a new Bitmap is created of the specified width and height and a Graphics surface is created around that Bitmap. Two 24-bit colors are randomly generated and are used as the two endpoint colors for a LinearGradientBrush, which is used to fill the bitmap. The brush is also configured with a random gradient angle from 0 to 360 degrees. With the background drawn, a FontFamily is chosen at random from those available (while I could have selected one at random from those installed on my system using FontFamily.Families, I chose to hardcode a short list of families for ease of implementation, but more so that I wouldn't end up choosing a symbol font that would render text practically impossible for a user to decipher). A font size is chosen based on the space available, and the text is drawn into the center of the image using a new LinearGradientBrush randomized just like the one used for the background. After the text has been drawn, distortion is added to the image by moving the pixels around in a simple wave-like fashion:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int newX = (int)(x + (distortion * Math.Sin(Math.PI * y / 64.0)));
        int newY = (int)(y + (distortion * Math.Cos(Math.PI * x / 64.0)));
        if (newX < 0 || newX >= width) newX = 0;
        if (newY < 0 || newY >= height) newY = 0;
        b.SetPixel(x, y, copy.GetPixel(newX, newY));
    }
}
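The remapping logic is language-agnostic. Here is a minimal Python sketch of the same wave distortion, operating on a nested-list "bitmap" (the function name and data structure are illustrative stand-ins, not the article's code), including the same fall-back-to-zero handling for out-of-range source coordinates:

```python
import math

def distort(pixels, distortion):
    """Apply the same sine/cosine wave remapping as the C# loop above.

    pixels: a 2D list indexed [y][x]. Returns a new, distorted copy.
    Out-of-range source coordinates fall back to index 0, exactly as
    the C# code resets newX/newY to 0.
    """
    height = len(pixels)
    width = len(pixels[0])
    copy = [row[:] for row in pixels]
    out = [row[:] for row in pixels]
    for y in range(height):
        for x in range(width):
            new_x = int(x + distortion * math.sin(math.pi * y / 64.0))
            new_y = int(y + distortion * math.cos(math.pi * x / 64.0))
            if new_x < 0 or new_x >= width:
                new_x = 0
            if new_y < 0 or new_y >= height:
                new_y = 0
            out[y][x] = copy[new_y][new_x]
    return out
```

With a distortion of 0 the image is returned unchanged; any non-zero amount shuffles pixels along overlapping sine and cosine waves, which is what makes the rendered word hard to segment mechanically.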
The distortion amount is also chosen randomly. With all of this randomness, the same word will look different every time it's rendered. For example, Figure 4 shows two different renderings of the word "word."
Figure 4. Two random renderings of "word" by ImageHipChallengeHandler
And Figure 5 shows two different renderings of the word "excel."
Figure 5. Two random renderings of "excel" by ImageHipChallengeHandler
HipValidator
HipValidator is the simplest of all of the controls in my solution. It derives from BaseValidator, overriding one method from the base class, EvaluateIsValid, and adding one additional property which allows the user to specify with which HipChallenge the validator is associated (note that it inherits ControlToValidate from BaseValidator, which allows it to be hooked up to an input control).
[TypeConverter(typeof(HipChallengeControlConverter))]
[Category("Behavior")]
public string HipChallenge {
    get { return _hipChallenge; }
    set { _hipChallenge = value; }
}

private HipChallenge AssociatedChallenge {
    get {
        if (HipChallenge == null || HipChallenge.Trim().Length == 0)
            throw new InvalidOperationException(
                "No challenge control specified.");
        HipChallenge hip =
            NamingContainer.FindControl(HipChallenge) as HipChallenge;
        if (hip == null)
            throw new InvalidOperationException(
                "Could not find challenge control.");
        return hip;
    }
}

protected override bool EvaluateIsValid() {
    // Get the validated control and its value. If we can get a value,
    // see if it authenticates with the associated HipChallenge.
    string controlName = base.ControlToValidate;
    if (controlName != null) {
        string controlValue = base.GetControlValidationValue(controlName);
        if (controlValue != null &&
            ((controlValue = controlValue.Trim()).Length > 0)) {
            return AssociatedChallenge.Authenticate(controlValue);
        }
    }
    return false;
}
EvaluateIsValid simply retrieves the ControlToValidate from the base class and gets its validation value (the validation value is the value of the property specified through the ValidationProperty attribute attached to the control, which in the case of the TextBox is the Text property). This value is then passed to the associated HipChallenge control's Authenticate method, returning the result.
The only other interesting thing to note is the TypeConverter attribute applied to the HipChallenge property. TypeConverters have two related uses. The first is to convert values from one type to another at runtime, for example converting a System.Drawing.Point to and from a string value. The second is to aid in design-time property configuration. For example, the System.ComponentModel.EnumConverter is used automatically for any Enum-typed properties shown in a PropertyGrid. This allows the user to select the value of the enumeration from a drop-down in the grid. In order to aid a developer at design-time using the HipValidator, I've created a special TypeConverter-derived class, HipChallengeControlConverter, that allows the developer to easily select an existing HipChallenge-derived instance on the page. Instead of having to manually type the ID of the control into the box in the PropertyGrid, a drop-down list is shown that lists the IDs of all HipChallenge-derived controls on the page; all the developer has to do is select one from the list. Implementing a custom TypeConverter for this purpose requires little code.
private class HipChallengeControlConverter : ValidatedControlConverter {
    private object[] GetControls(IContainer container) {
        ArrayList list = new ArrayList();
        foreach (IComponent comp in container.Components) {
            HipChallenge hip = comp as HipChallenge;
            if (hip != null) {
                if (hip.ID != null && hip.ID.Trim().Length > 0) {
                    list.Add(hip.ID);
                }
            }
        }
        list.Sort(Comparer.Default);
        return list.ToArray();
    }

    public override StandardValuesCollection GetStandardValues(
        ITypeDescriptorContext context) {
        if (context == null || context.Container == null)
            return null;
        object[] controls = GetControls(context.Container);
        if (controls != null)
            return new StandardValuesCollection(controls);
        return null;
    }
}
It derives from ValidatedControlConverter (the TypeConverter used for the ControlToValidate property) and overrides the GetStandardValues method. This method needs to return a StandardValuesCollection filled with the string IDs to display in the drop-down list. So all I do is loop through all of the IComponent instances in the context.Container.Components collection, looking for HipChallenge controls and populating the StandardValuesCollection with their IDs. It should be noted that this scenario is made even simpler in ASP.NET 2.0 through the existence of the ControlIDConverter class. By deriving from ControlIDConverter instead of from ValidatedControlConverter, my implementation of HipChallengeControlConverter simply becomes:
private class HipChallengeControlConverter : ControlIDConverter {
    protected override bool FilterControl(Control control) {
        return control is HipChallenge;
    }
}
While this solution was written for ASP.NET 1.1, it can be used in ASP.NET 2.0 without any changes. Figure 6 shows the ImageHipChallenge and HipValidator controls incorporated into the Personal Web Site Starter Kit that is included with the Visual Web Developer 2005 Express Edition Beta.
Figure 6. Login modified to incorporate ImageHipChallenge and HipValidator
In fact, run this solution under ASP.NET 2.0 and the controls developed here will automatically be augmented to support new ASP.NET 2.0 functionality that improves the solution. For example, one problem with validation controls in ASP.NET 1.x is that they have page-wide scope. This means that any control on the page that initiates a postback will cause the validation logic to execute, even if that control has nothing to do with the validator. ASP.NET 2.0 introduces validation groups to solve this problem, allowing a control to select to which validators it is related. This functionality is exposed through the ValidationGroup property on the BaseValidator control as well as on controls that can cause a form to submit, such as Button. When running under ASP.NET 2.0, HipValidator instantly gains this functionality as it derives from BaseValidator.
AudioHipChallenge
AudioHipChallenge uses the text-to-speech (TTS) engine distributed with the Windows operating system to generate a WAV file that plays a spoken challenge to the user. The user is then required to type in the spoken word to pass the test. For an automated attacker to solve the puzzle, it would need to be able to parse the WAV file to extract the spoken word using some form of voice recognition software. Ideally, the WAV would be generated in such a way that makes it difficult for the attacker to do so while still allowing a non-automated user access.
As with ImageHipChallenge, the AudioHipChallenge class works in conjunction with an IHttpHandler, in this case AudioHipChallengeHandler. AudioHipChallenge is used as a control in the page that renders the HTML challenge to the client, whereas AudioHipChallengeHandler generates WAV audio files based on the query string information rendered by AudioHipChallenge.
RenderChallenge is the core method of the AudioHipChallenge control, taking the challenge ID and rendering it to the browser. As mentioned previously, the base HipChallenge class handles the generation of the hidden field to store the encrypted content, so RenderChallenge only has to generate the controls specific to this challenge display.
protected override void RenderChallenge(Guid id, string content) {
    // Get the url to the audio
    string url = null;
    try {
        // If it's a valid URL, go with it.
        new Uri(RenderUrl);
        url = RenderUrl;
    }
    catch {}

    // If a fully-qualified URL wasn't supplied, treat it as relative
    if (url == null) {
        string appPath = Page.Request.ApplicationPath;
        url = Page.Request.Url.GetLeftPart(UriPartial.Authority) +
            appPath + (appPath.Length > 0 ? "/" : "") + RenderUrl +
            "?" + ID_KEY + "=" + id.ToString("N");
    }

    // Add the WMP player control to the output
    ... + "<PARAM name=\"autoStart\" value=\"" + _autoStart + "\">";
    Controls.Add(player);

    // Add a button to play the sound
    if (_showPlayButton) {
        Button playButton = new Button();
        if (!this.Width.IsEmpty) playButton.Width = this.Width;
        if (!this.Height.IsEmpty) playButton.Height = this.Height;
        playButton.Text = Text;
        playButton.EnableViewState = false;
        playButton.CausesValidation = false;
        playButton.Attributes["OnClick"] =
            wmpId + ".controls.play(); return false;";
        Controls.Add(playButton);
    }
}
The HTML rendered to the browser by this method consists of an object tag that creates an embedded client-side Windows Media Player control. The URL referenced by the media player is set according to the value of the AudioHipChallenge.RenderUrl property (it must be a fully-qualified URL for WMP to successfully connect), and it can be configured to play the WAV automatically when the page loads using the AudioHipChallenge.AutoStart property. If the AudioHipChallenge.ShowPlayButton property is true, an additional Button control is rendered that is configured to play the WAV when the button is clicked, allowing the user to hear the challenge as many times as they require.
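The URL-resolution rule just described (use RenderUrl verbatim when it is already fully qualified, otherwise build an absolute URL from the application path and append the challenge ID) can be sketched in a few lines of Python; the function and parameter names here are illustrative, not from the article:

```python
from urllib.parse import urlsplit

def challenge_url(render_url, scheme_and_authority, app_path, challenge_id):
    """Mirror of the URL logic in AudioHipChallenge.RenderChallenge.

    If render_url already parses as an absolute URL (has a scheme),
    it is used as-is; otherwise it is treated as relative to the
    application path and the challenge id is appended.
    """
    if urlsplit(render_url).scheme:  # already absolute, e.g. "https://..."
        return render_url
    sep = "/" if len(app_path) > 0 else ""
    return (scheme_and_authority + app_path + sep + render_url
            + "?id=" + challenge_id)
```

Note that, as in the C# code, the challenge ID is appended only in the relative branch; a fully-qualified RenderUrl is trusted to carry whatever query string it needs.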
For the embedded browser control to retrieve this WAV file, AudioHipChallengeHandler must be mapped as the IHttpHandler for all requests for the URL specified in the RenderUrl property. As with ImageHipChallengeHandler, this can be configured in the Web.config for the ASP.NET solution.
<httpHandlers>
    <add verb="*" path="AudioHipChallenge.aspx"
        type="Msdn.Web.UI.WebControls.AudioHipChallengeHandler, Hip"/>
</httpHandlers>
AudioHipChallengeHandler's implementation of ProcessRequest is very straightforward. The challenge ID is retrieved from the query string and is used to retrieve the challenge text from the cache. If the text is available (meaning that the ID is valid and that the associated content hasn't expired), a temporary file is created to store the WAV. The WAV data is created from the decrypted challenge word and the temporary file is streamed to the client, after which the temporary file is deleted so as not to pollute the file system with unnecessary garbage.
public void ProcessRequest(HttpContext context) {
    // Get the challenge text
    string text = HipChallenge.GetChallengeText(new Guid(
        context.Request.QueryString[AudioHipChallenge.ID_KEY]));
    if (text != null) {
        // Get a path for the temporary audio file.
        FileInfo tempAudio = new FileInfo(Path.GetTempPath() + "/" +
            "aud" + Guid.NewGuid().ToString("N") + ".wav");
        try {
            // Speak the data to the file
            SpeakToFile(text, tempAudio);

            // Send the audio to the client
            HttpResponse resp = context.Response;
            resp.ContentType = "audio/wav";
            resp.WriteFile(tempAudio.FullName, true);
        }
        finally {
            // Delete the temporary audio file
            tempAudio.Delete();
        }
    }
}
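The create-stream-delete pattern in this handler is a general one. A minimal Python sketch of it (the helper names are mine, not the article's) looks like this:

```python
import os
import tempfile

def serve_generated_file(generate, send, suffix=".wav"):
    """Sketch of the handler's temp-file pattern: create a uniquely
    named temporary file, let `generate` fill it, stream its bytes out
    via `send`, and always delete it afterwards, mirroring the C#
    try/finally above so no garbage accumulates on disk."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)  # we only need the unique path; generate() reopens it
    try:
        generate(path)
        with open(path, "rb") as f:
            send(f.read())
    finally:
        os.remove(path)
```

The finally clause is the important part: the file is removed whether generation succeeds, streaming fails, or the client disconnects mid-transfer.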
If you look at AudioHipChallengeHandler.SpeakToFile, you'll notice that there's very little code involved in what is actually a complicated task. Fortunately for me, I didn't have to write my own TTS engine and was able to take advantage of the Microsoft-provided speech libraries already available on my system. These libraries are programmatically exposed as a set of COM components that are easily accessed from a .NET client through the wonders of COM interop.
Figure 7. Importing the Microsoft Speech Object Library
Using tlbimp.exe from the .NET Framework SDK or the "Add Reference..." option in Visual Studio .NET, you can import the Microsoft Speech Object Library (sapi.dll) into your own project (by default, the wrapper will be named Interop.SpeechLib.dll), as shown in Figure 7.
SpVoice is the core class necessary for TTS. I first retrieve a list of the voices installed on my system using SpVoice.GetVoices. This method returns an ISpeechObjectTokens collection from which I pick a random voice to be stored into the SpVoice object's Voice property. An SpAudioFormat is then created for use with the SpVoice and is configured to use the GSM610 11kHz mono audio compressor. This compressor creates decently small WAV files, which has the side benefit for our purposes of distorting the generated voice. Finally, an SpFileStream is generated for a disk-based file and the SpVoice.Speak method is used to convert the specified text to speech, writing it to the file.
private void SpeakToFile(string text, FileInfo audioPath) {
    SpFileStream spFileStream = new SpFileStream();
    try {
        // Create the speech engine and set it to a random voice
        SpVoice speech = new SpVoice();
        ISpeechObjectTokens voices =
            speech.GetVoices(string.Empty, string.Empty);
        speech.Voice = voices.Item(NextRandom(voices.Count));

        // Set the format type to be heavily compressed.
        SpAudioFormatClass format = new SpAudioFormatClass();
        format.Type = SpeechAudioFormatType.SAFTGSM610_11kHzMono;
        spFileStream.Format = format;

        // Open the output stream and speak to it
        spFileStream.Open(audioPath.FullName,
            SpeechStreamFileMode.SSFMCreateForWrite, false);
        speech.AudioOutputStream = spFileStream;
        speech.Rate = -5; // Ranges from -10 to 10
        speech.Speak(text, SpeechVoiceSpeakFlags.SVSFlagsAsync);
        speech.WaitUntilDone(System.Threading.Timeout.Infinite);
    }
    finally {
        // Close the output file
        spFileStream.Close();
    }
}
AudioHipChallenge also provides the SpellWords property, which can be used to force the control to generate an audio file that speaks the spelling of the word rather than its pronunciation. This is done by overriding the HipChallenge.ChooseWord method that selects the next word to be spoken.
protected override string ChooseWord() {
    // Get a word
    string word = base.ChooseWord();

    // If the user has opted to have words spelled, generate
    // a string that contains the spelling and return that instead.
    if (_spellWords) {
        char[] letters = word.ToCharArray();
        StringBuilder sb = new StringBuilder(letters.Length * 3);
        foreach (char letter in letters) {
            int pos = (int)(Char.ToLower(letter) - 'a');
            if (pos >= 0 && pos < 26) {
                sb.Append(_spelledLetters[pos]);
                sb.Append("; ");
            }
        }
        return sb.ToString();
    }

    // Otherwise, just return the word
    else return word;
}
The overridden method first calls to the base method to get the next word. If the SpellWords property has been set to true, it splits the word string into its constituent characters and generates a new string with all of the letters separated by semicolons, forcing the TTS engine to speak each letter individually. Unfortunately for my purposes, the TTS engine doesn't pronounce each character as I would have hoped (although the pronunciation it chooses does make sense for certain scenarios). To force it to use the pronunciation I want it to, I created an array of strings that map to the pronunciation of each letter as I'd expect: "hay" for 'a', "bee" for 'b', and so on.
private static string [] _spelledLetters = { "hay", "bee", "see", "dee", "ee", "ef", "gee", "haych", "eye", "jay", "kay", "el", "em", "en", "oh", "pee", "queue", "are", "es", "tee", "you", "vee", "double you", "ex", "why", "zee" };
The array has 26 elements, one for each letter, and stores them in alphabetical order. Thus, to retrieve the pronunciation string for a particular letter, all I have to do is subtract 'a' from the lower case version of that letter and I end up with the correct index into the array.
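This index arithmetic translates directly into other languages. Here is a small Python sketch that mirrors the C# logic (the helper name is mine; the pronunciation table is copied from the article's array):

```python
SPELLED_LETTERS = [
    "hay", "bee", "see", "dee", "ee", "ef", "gee", "haych", "eye",
    "jay", "kay", "el", "em", "en", "oh", "pee", "queue", "are",
    "es", "tee", "you", "vee", "double you", "ex", "why", "zee",
]

def spell_word(word):
    """Expand a word into its letter-by-letter pronunciation, with
    semicolons so a TTS engine pauses between letters. The index into
    the 26-entry table is ord(lowercased letter) - ord('a'); anything
    outside [0, 26) (digits, punctuation) is skipped, as in the C#."""
    parts = []
    for letter in word:
        pos = ord(letter.lower()) - ord("a")
        if 0 <= pos < 26:
            parts.append(SPELLED_LETTERS[pos] + "; ")
    return "".join(parts)
```

For example, spelling "ab" yields "hay; bee; ", which the TTS engine reads as the two letter names with a pause between them.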
That's it for the implementation of the control itself. From a developer's perspective, using the AudioHipChallenge is very straightforward. An instance of the control is added to a page along with a TextBox into which a user can enter his solution. A HipValidator is added to the page, its ControlToValidate property set to the ID of the TextBox and its HipChallenge property set to the ID of the AudioHipChallenge control. Finally, in the page's Load event handler, words are added to the AudioHipChallenge.Word collection. Simple and easy to use.
This is a simple implementation of an audio-based challenge, and there are certainly plenty of avenues to explore if you want to build something more robust. For example, you could overlay the spoken text on top of a background conversation and add reverb to frustrate even the best speech recognition engines. You should also evaluate how well your users can understand the words being spoken. If you opt to go the route of having individual letters read, evaluate how well your users do with letters that sound similar, such as 'B' versus 'P', and 'D' versus 'T'; you might decide to go with numerical digits instead.
Does It Work?
An Internet search reveals that a fair amount of research is being done into breaking these puzzles, recently resulting in decent success rates. The so-called "EZ-Gimpy" puzzle like the one in use by Yahoo has been broken by researchers with a success rate as high as 93 percent, while simple TTS puzzles such as the one created in this article have been solved by recognizers almost as well as by humans. Does that mean we should give up and scrap the whole thing? Probably not. Even if the puzzles can be solved by a computer, they still significantly raise the bar for an attacker. Developers of automated programs to attack a CAPTCHA-protected site will need to incorporate the recognition engines into their applications and scripts, requiring a significant investment of resources from the attacker. In addition, computer-based recognition engines are computationally intensive, also raising the cost of an attack. Eventually these automated solutions could be distributed to "script kiddies" around the Internet, at which point the compromised puzzle could be changed to something more difficult to solve, and the cycle begins again. Like many related problems, it's an arms race between the defenders and the attackers.
Other attacks have been proposed that take advantage of cheap labor instead of attempting to actually break a certain puzzle. After all, if a problem is difficult for a computer to solve, why not pay a human to do it? One might imagine a room full of minimum-wage employees solving these puzzles all day long, but that's not an extremely cost effective solution for an attacker. "Pay" doesn't necessarily involve money, however, and could be based on a barter system. The classic example of this is the attacker setting up an adult Web site, offering free viewing materials to anyone willing to solve a puzzle. The attacker can then transfer the puzzle from the attacked site to the user of the adult site, prompting the user to solve the problem, and then submitting the human-generated solution back to the attacked site. While certainly feasible, this solution can also require significant investments of resources, and unless there is a non-trivial amount of traffic to the adult site, it can be thwarted with some of the defenses discussed in this article, such as putting a short time limit on a puzzle.
Denial of service (DoS) attacks are also an issue when dealing with pages that require significant server resources. Both the image and audio generation logic presented in this article require a fair amount of CPU cycles, so as with any resource intensive Web page an attacker could try to mount a DoS attack by bombarding the ImageHipChallengeHandler and AudioHipChallengeHandler endpoints with requests. Fortunately, there are many plausible solutions to preventing such attacks. One solution would be to cache the puzzles for some period of time such that requests for a puzzle given the same challenge text would yield the same image (you'd want to keep the cache timeout small enough so that other attacks wouldn't benefit). Another solution sometimes used to lessen the server impact of DoS attacks would be to add a random but relatively significant time delay to the Web page's response by doing a Thread.Sleep before any processing happens. A normal user won't mind if a request takes an extra second to return, but an attacker attempting to pin the CPU at 100 percent won't have as easy a time doing so.
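The random-delay idea can be expressed in a few lines; this Python sketch wraps an expensive puzzle-generation function, and the wrapper name and delay bounds are illustrative choices of mine, not values from the article:

```python
import random
import time

def throttled_handler(generate_puzzle, min_delay=0.25, max_delay=1.0):
    """Wrap an expensive puzzle-generation function with a small random
    delay, one of the DoS-softening ideas described above. A human
    barely notices the extra fraction of a second; an attacker trying
    to saturate the CPU with requests is slowed on every call."""
    def handler(*args, **kwargs):
        delay = random.uniform(min_delay, max_delay)
        time.sleep(delay)
        return generate_puzzle(*args, **kwargs)
    return handler
```

In a real deployment the sleep would ideally happen on a non-blocking timer rather than a worker thread, so the delay itself does not become a resource-exhaustion vector.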
As noted earlier, HIP puzzles are very difficult to get right and will be broken eventually. As a result, there is significant churn in implementation as both the puzzles and the attacks progress (all the more reason to build your HIP framework such that new puzzles can be swapped in as needed and with little effort). This means that HIP solutions need to be watched carefully, actively monitored for successful attacks and usability problems. All this points towards centralizing a good solution into a hosted service that your site can then consume. However, until such a service exists, is trustworthy, is time-tested, and is cost-effective, the approaches described in this article can get you and keep you up and running.
Also keep in mind that while attacks against these types of systems exist (they almost always do for any system), real-world deployments have shown CAPTCHAs to be very effective, on both large and small sites, deterring all but the most determined of attackers.
Of course, arguments against the use of such systems aren't necessarily limited to their vulnerabilities. Some people find these puzzles just plain annoying and are dissuaded from using a site that presents them. Before implementing this on your own site, you might consider reading up on some of the work being done to better this area. The paper by Pinkas and Sander mentioned at the beginning of this article discusses some good ideas that can be used to make the user experience more pleasurable, specifically relating to using CAPTCHAs in a login scenario.
No article on CAPTCHA would be complete without mentioning the accessibility restrictions imposed by such a solution. Current CAPTCHA puzzles rely on human senses for solution generation. Visual puzzles require good eyesight, while aural puzzles require good hearing, and obviously a significant portion of humans would be stymied by one or both of these types of puzzles (for a more in-depth analysis, see the W3C's Inaccessibility of Visually-Oriented Anti-Robot Tests). Even those with good eyesight might have trouble with certain visual puzzles; for example, a puzzle that relies on a user differentiating greens from reds might cause trouble amongst the (approximately) 10 percent of the male population that is colorblind. Allowing the presentation of multiple puzzle types (and allowing the user to choose based on their capabilities) is important for a site to best accommodate as large a portion of the user population as possible. Most Web users do have satisfactory use of at least one required sense so that they could solve either an aural or visual problem, but as computers and the Web become more and more accessible, this won't always be the case. For now, any site that chooses to use CAPTCHAs should allow a user to solve either kind. In the future, one could imagine puzzles that don't involve either of those senses, possibly ones based on smell or on taste, or even puzzles that rely on logic rather than on the presentation of the puzzle itself. Of course, if Turing was correct, eventually computers will reach a state of sophistication where no human could tell a human apart from a computer, at which point all of these systems would become obsolete. That is, unless computers become smarter than humans, at which point maybe a computer could still tell a computer from a human, even if a human couldn't. Food for thought.
Stephen Toub is the Technical Editor for MSDN Magazine, for which he also writes the .NET Matters column.
sh 0.107
Python subprocess wrapper
``sh`` is a subprocess interface that lets you call programs as if they were functions, mapping your system programs to Python functions dynamically. ``sh`` helps you write shell scripts in Python by giving you the good features of Bash (easy command calling, easy piping) with all the power and flexibility of Python.
```python
from sh import ifconfig
print ifconfig("eth0")
```
``sh`` is not a collection of system commands implemented in Python.
# Getting
$> pip install sh
# Usage
The easiest way to get up and running is to import sh
directly or import your program from sh:
```python
import sh
print sh.ifconfig("eth0")
from sh import ifconfig
print ifconfig("eth0")
```
A less common usage pattern is through the ``sh.Command`` wrapper, which takes a
full path to a command and returns a callable object. This is useful for
programs that have weird characters in their names or programs that aren't in
your $PATH:
```python
import sh
ffmpeg = sh.Command("/usr/bin/ffmpeg")
ffmpeg(movie_file)
```
The last usage pattern is for trying ``sh`` through an interactive REPL. By
default, this acts like a star import (so all of your system programs will be
immediately available as functions):
$> python sh.py

## Keyword Arguments

Keyword arguments work like you'd expect: they get converted into command-line options:

```python
# resolves to "curl -o page.html --silent"
curl("", o="page.html", silent=True)
# or if you prefer not to use keyword arguments, this does the same thing:
curl("", "-o", "page.html", "--silent")
# resolves to "adduser amoffat --system --shell=/bin/bash --no-create-home"
adduser("amoffat", system=True, shell="/bin/bash", no_create_home=True)
# or
adduser("amoffat", "--system", "--shell", "/bin/bash", "--no-create-home")
```
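Under the hood, keyword arguments have to be resolved into argv entries. Here is an illustrative Python sketch of one plausible resolution scheme; it is not ``sh``'s actual implementation (real ``sh``, for instance, renders some options in ``--opt=value`` form):

```python
def kwargs_to_argv(**kwargs):
    """Sketch of keyword-argument resolution: single-letter names become
    short flags, longer names become long options, a True value becomes
    a bare flag, and underscores are converted to dashes."""
    argv = []
    for name, value in kwargs.items():
        name = name.replace("_", "-")
        flag = ("-" + name) if len(name) == 1 else ("--" + name)
        if value is True:
            argv.append(flag)
        else:
            argv.append(flag)
            argv.append(str(value))
    return argv
```

So `kwargs_to_argv(o="page.html", silent=True)` produces `["-o", "page.html", "--silent"]`, matching the curl example above.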
## Piping
Piping has become function composition:
```python
# sort this directory by biggest file
print sort(du(glob("*"), "-sb"), "-rn")
# print the number of folders and files in /etc
print wc(ls("/etc", "-1"), "-l")
```
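Stripped of ``sh``'s sugar, "piping as function composition" is just nesting calls so that the inner command's stdout becomes the outer command's stdin. A plain-subprocess sketch (not ``sh``'s API) makes this concrete:

```python
import subprocess

def run(cmd, stdin_text=""):
    """Run a program, feed it stdin_text, and return its stdout.

    A minimal stand-in for sh-style commands, used to show that a
    shell pipeline is just function composition: run(outer, run(inner))
    is the moral equivalent of "inner | outer".
    """
    result = subprocess.run(cmd, input=stdin_text,
                            capture_output=True, text=True)
    return result.stdout

# The shell pipeline "sort -r | sort" as nested function calls:
# run(["sort"], run(["sort", "-r"], "a\nb\n"))
```

The glob/du/sort example above follows the same shape: the innermost call runs first, and each enclosing call consumes the text the inner one produced.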
## Redirection
``sh`` can redirect a command's output to a file using the special "_out" keyword argument:

```python
ls(_out="files.list")
```

## Baking
``sh`` is capable of "baking" arguments into commands. This is similar
to the stdlib functools.partial wrapper. An example can speak volumes:
```python
from sh import ssh
# calling whoami on the server. this is tedious to do if you're running
# any more than a few commands.
iam1 = ssh("myserver.com", "-p 1393", "whoami")
# wouldn't it be nice to bake the common parameters into the ssh command?
myserver = ssh.bake("myserver.com", p=1393)
print myserver # "/usr/bin/ssh myserver.com -p 1393"
# resolves to "/usr/bin/ssh myserver.com -p 1393 whoami"
iam2 = myserver.whoami()
assert(iam1 == iam2) # True!
```
Now that the "myserver" callable represents a baked ssh command, you
can call anything on the server easily:
## Glob Expansion

This will not work:

```python
from sh import du
print du("*")
```
You'll get an error to the effect of "cannot access '\*': No such file or directory".
This is because the "\*" needs to be glob expanded:
```python
from sh import du, glob

print du(glob("*"))
```
``sh`` automatically handles underscore-dash conversions. For example, if you want
to call apt-get:
```python
from sh import apt_get

apt_get("install", "mplayer", y=True)
```
## Exit Codes

When a command run through ``sh`` exits with a non-zero exit code, ``sh`` raises an exception
based on that exit code. However, if you have determined that an error code
is normal and want to retrieve the output of the command without ``sh`` raising an
exception, you can use the "_ok_code" special argument to suppress the exception:
```python
output = sh.ls("nonexistent_folder", _ok_code=[1, 2])
```
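The same allow-list idea can be expressed with plain subprocess. This sketch (not ``sh``'s implementation) treats the listed exit codes as success and raises for anything else:

```python
import subprocess
import sys

def run_allowing(cmd, ok_codes=(0,)):
    """Run a command and return its stdout, raising only when the exit
    code is outside the allowed set. This mirrors the _ok_code idea:
    some tools use non-zero codes for normal conditions (e.g. grep
    exits 1 when there are simply no matches)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode not in ok_codes:
        raise RuntimeError("exit code %d not allowed" % result.returncode)
    return result.stdout
```

A caller that knows exit code 1 is benign for its tool would pass `ok_codes=(0, 1)` and still get exceptions for genuinely unexpected failures.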
You can map an XML namespace prefix to a CLR namespace in an assembly so that custom classes can be used from XAML. Consider a custom class defined in its own namespace:

namespace SDKSample {
    public class ExampleClass : ContentControl {
        public ExampleClass() {
            ...
        }
    }
}

To use this class in XAML, declare a prefix that maps to the CLR namespace:

<Page x:Class="WPFApplication1.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      xmlns:custom="clr-namespace:SDKSample">
  ...
  <custom:ExampleClass />
  ...
</Page>

The WPF assemblies apply XmlnsDefinitionAttribute to map their CLR namespaces, such as System.Windows and System.Windows.Controls, to the default WPF XML namespace. The XmlnsDefinitionAttribute takes two parameters: the XML namespace identifier and the name of the CLR namespace being mapped.

xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"

The Presentation namespace from .NET 3.0 is still allowed, though. The XAML namespace remains the same:

xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
This walkthrough guides you through the steps of exposing the public properties of any object to the Property Browser. Tool windows and document windows expose properties this way. The changes you make to properties through these windows are reflected in the Property Browser.
· Exposing Properties to the Property Browser
· Exposing Tool Window Properties
· Changing Selection Lists
In this section, you create a simple tool window package and display the public properties of the associated window pane object in the Property Browser.
To expose properties to the Property Browser
1. Create a new Visual Studio Integration Package project named, for example, MyObjectProps. For more information on creating a managed VSPackage, see the How to: Create a VSPackage Using the Visual Studio Integration Package Wizard topic in the VSIP 2005 beta 2 documentation.
The Package Wizard runs.
2. In the Select a Programming Language page, select Visual C#.
3. In the Select VSPackage Options page, select Tool Window.
4. In the Tool Window Options page, change the Window Name to "My Object Properties". Click Finish.
The wizard creates the managed project MyObjectProps and the unmanaged resource-only project MyObjectPropsUI.
5. Open the file MyObjectProps/My Object Properties.cs and add this code to the end of the MyToolWindow class, just after the Dispose method:
base.Dispose( disposing );
}
private ITrackSelection trackSel;
private ITrackSelection TrackSelection {
    get {
        if (trackSel == null)
            trackSel = GetService(typeof(STrackSelection)) as ITrackSelection;
        return trackSel;
    }
}

private SelectionContainer selContainer;

public void UpdateSelection() {
    ITrackSelection track = TrackSelection;
    if (track != null)
        track.OnSelectChange((ISelectionContainer)selContainer);
}

public void SelectList(ArrayList list) {
    selContainer = new SelectionContainer(true, false);
    selContainer.SelectableObjects = list;
    selContainer.SelectedObjects = list;
    UpdateSelection();
}

public override void OnToolWindowCreated() {
    ArrayList listObjects = new ArrayList();
    listObjects.Add(this);
    SelectList(listObjects);
}
The TrackSelection property uses GetService to obtain an STrackSelection service, which provides an ITrackSelection interface. The OnToolWindowCreated event handler and SelectList method together create a list of selected objects containing only the tool window pane object itself. The UpdateSelection method tells the Property Browser to display the public properties of the tool window pane.
6. Build and launch the project in debug mode by pressing the keyboard shortcut, F5. This launches Visual Studio Exp.
Note Both versions of Visual Studio are open at this time.
7. From Visual Studio Exp, select the View/Other Windows menu item. Select the My Object Properties window. The window opens and the public properties of the window pane appear in the Property Browser.
8. Click in the Solution Explorer. The window pane properties disappear. Click in the My Object Properties window. The properties reappear.
9. Change the Caption property in the Property Browser to Something Else. The My Object Properties window caption changes accordingly.
In this section, you add a tool window and expose its properties. The changes you make to properties are reflected in the Property Browser.
To expose tool window properties
1. Close Visual Studio Exp.
2. Open the file MyObjectProps/My Object Properties.cs and add the public boolean property IsChecked to the end of the MyToolWindow class, just after the OnToolWindowCreated method:
private bool isChecked = false;

[Category("My Properties")]
[Description("MyControl properties")]
public bool IsChecked {
    get { return isChecked; }
    set {
        isChecked = value;
        control.checkBox1.Checked = value;
    }
}
3. In the MyToolWindow constructor, add a this pointer to the MyControl construction:
control = new MyControl(this);
4. Open the file MyControl.cs in code view. Replace the constructor with the following code:
private MyToolWindow pane;

public MyControl(MyToolWindow pane)
{
    InitializeComponent();
    this.pane = pane;
}
This gives MyControl access to the MyToolWindow object.
5. Change to design view.
6. Select the button and remove the anchors from the Anchor property. Shrink the button to a reasonable size in the lower left corner of the window. Drag a checkbox from the Toolbox to the upper left corner.
7. Double-click the checkbox.
This creates the checkBox1_CheckedChanged event handler and opens it in the code editor.
8. Replace the checkbox event handler with the following code:
private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
    pane.IsChecked = checkBox1.Checked;
    pane.UpdateSelection();
}
9. Make checkBox1 public:
public CheckBox checkBox1;
10. Add this line to the end of the MyControl constructor, so the constructor body reads:
InitializeComponent();
this.pane = pane;
checkBox1.Checked = pane.IsChecked;
11. Build and launch the project in debug mode by pressing the keyboard shortcut, F5. This launches Visual Studio Exp.
12. From Visual Studio Exp, select the View/Other Windows menu item. Select the My Object Properties window. The window opens and the public properties of the window pane appear in the Property Browser. The IsChecked property appears under the category My Properties.
13. Select the IsChecked property. The description MyControl Properties appears.
14. Click on the checkbox in the My Object Properties window. IsChecked changes to True. Click it again to uncheck it. IsChecked changes to False. Change the value of IsChecked in the Property Browser. The checkbox in My Object Properties window changes to match the new value.
Note If you need to dispose of a property or object displayed in the Property Browser, call OnSelectChange with a null selection container first. After disposing the property or object, you can change to a selection container with updated SelectableObjects and SelectedObjects lists.
In this section, you add a selection list for a simple property class and use the tool window interface to choose which selection list to display.
To change selection lists
1. Close Visual Studio Exp.
2. Open the file MyObjectProps/My Object Properties.cs and add the public class Simple to the beginning of the file, just after the namespace statement:
namespace Vsip.MyObjectProps2
{
    public class Simple {
        private string someText = "";

        [Category("My Properties")]
        [Description("Simple Properties")]
        [DisplayName("MyText")]
        public string SomeText {
            get { return someText; }
            set { someText = value; }
        }

        [Description("Read-only property")]
        public string ReadOnly
        {
            get { return "Hello"; }
        }
    }
    // ... rest of the namespace (MyToolWindow and so on)
}
An object of type Simple has the public string property SomeText and the read-only property ReadOnly.
3. Add this code to the end of the MyToolWindow class, just after the IsChecked property:
private Simple simpleObject = null;
public Simple SimpleObject {
    get {
        if (simpleObject == null) simpleObject = new Simple();
        return simpleObject;
    }
}

public void SelectSimpleList() {
    ArrayList listObjects = new ArrayList();
    listObjects.Add(SimpleObject);
    SelectList(listObjects);
}

public void SelectThisList() {
    ArrayList listObjects = new ArrayList();
    listObjects.Add(this);
    SelectList(listObjects);
}
This creates the singleton property SimpleObject and two methods to switch the Property Browser selection between the window pane and the Simple object.
4. Open the MyControl.cs file in code view. Add these lines of code to the end of the checkbox event handler:
if (pane.IsChecked)
    pane.SelectSimpleList();
else
    pane.SelectThisList();
5. Build and launch the project in debug mode by pressing the keyboard shortcut, F5. This launches Visual Studio Exp.
6. From Visual Studio Exp, select the View/Other Windows menu item. Select the My Object Properties window. The window opens and the public properties of the window pane appear in the Property Browser.
7. Click on the checkbox in the My Object Properties window. The Property Browser displays the Simple object properties SomeText and ReadOnly, and nothing else. Click the checkbox again to uncheck it. The window pane properties reappear.
Note The SomeText property is displayed as MyText because of its DisplayName attribute.
Enjoy, Martin Tracy, Programmer Writer
Important note: This material is provided in an 'as is' condition so that you might evaluate it for your own use at your own risk.
Visual Basic:
Public Class FileSyncProvider
    Inherits UnmanagedSyncProviderWrapper
    Implements IDisposable

Dim instance As FileSyncProvider

C#:
public class FileSyncProvider : UnmanagedSyncProviderWrapper, IDisposable

C++:
public ref class FileSyncProvider : public UnmanagedSyncProviderWrapper, IDisposable

J#:
public class FileSyncProvider extends UnmanagedSyncProviderWrapper implements IDisposable
To synchronize all files and subfolders in a directory, pass the replica ID and root directory to FileSyncProvider(Guid,String), and pass the provider to a SyncAgent object to handle the synchronization session.
By default, synchronization metadata is stored in a Metadata Storage Service database file in the root directory of the replica. To customize the location and name of this file, specify these by using FileSyncProvider(Guid,String,FileSyncScopeFilter,FileSyncOptions,String,String,String,String).
Control over which files and folders are included in the synchronization scope can be accomplished by configuring a FileSyncScopeFilter and passing it to the provider's constructor. The filter contains properties that can be used to exclude a list of files, exclude a list of folders, exclude files and folders based on their attributes, and explicitly include a list of files.
A number of configuration options, FileSyncOptions, are available to control how the provider behaves during synchronization, for example, whether it moves deleted files to the recycle bin or deletes them permanently from the file system.
A variety of events are available to the application that wants to show progress or dynamically skip particular changes during the session.
The provider can be put into preview mode by setting PreviewMode to true before starting synchronization. While in preview mode, the provider will perform all actions as if a real synchronization session is occurring, including firing all events. However, the provider will not actually apply any changes to the destination replica.
Concurrent synchronization operations to the same file store are not supported. If another provider instance was previously initialized with the same replica (that is, the same values for directory path and metadata file path), but has not yet been released, the constructor will throw a ReplicaMetadataInUseException from the metadata store.
Here is the final trick. And those of you who have read the series have waited a while for this final post. (Sorry for the delay.) Programming against an interface gives you a consistent programming model across the various distributed platforms, but abstracting the code that creates the proxy is also critical to hiding the .NET Remoting-isms from your client code.
Take for example the following line from the previous post:
IHello proxy = (IHello)Activator.GetObject(typeof(IHello), "tcp://localhost/Hello.rem");
The use of the Activator class implies a specific distributed application stack so hiding that from your implementation will ease migration to alternate or future stacks with limited client impact. The same holds true for the hosting code written to register objects and channels but this code is often already isolated in the service startup routines (e.g. Main or OnStart for a service).
The easiest way to abstract away the proxy generation code is to use a basic factory pattern on the client that creates the proxy classes. This has the added advantage of enabling the developer to use pooling or handle lifetime issues but the core value for this conversation is the abstraction of proxy creation.
Here is a simple example:
public static class HelloClientProxyFactory
{
    public static IHello CreateHelloProxy()
    {
        return (IHello)Activator.GetObject(typeof(IHello), "tcp://localhost/Hello.rem");
    }
}
As you migrate forward from .NET Remoting to other platforms (e.g., Indigo) this proxy factory can be updated and deployed to your clients enabling the client programming model to remain entirely unchanged but allow your developers to move to a completely different distributed application stack underneath.
This concludes the Shhh… series but I have more posts planned for the migration story from .NET Remoting to WCF (“Indigo”).
Stay tuned.
A little self-promotion never hurt anyone...
Some new MSDN TV content has recently been made available on the new features of .NET Remoting for .NET Framework 2.0. Enjoy!
Matt Tavis shows some new features and code examples in .NET Remoting in .NET Framework 2.0, including the new IpcChannel, the secure TcpChannel, and Version Tolerant Serialization (VTS) to allow authors to version their types without breaking serialization.
Now that you expose your remote object through a CLR interface, that interface should limit itself to interfaces and serializable types as parameters and return values. This guideline is really just a follow-on to rule one about using only CLR interfaces.
Let's expand on the original example from part 1. The original interface from part 1 is below:
public interface IHello
{
    string HelloWorld(string name);
}
This works fine since HelloWorld accepts and returns only strings, which are serializable. What if we want to add a factory pattern that returns newly created remote objects, or make the HelloWorld method take a custom type? For these scenarios we should continue to use interfaces and define serializable data types for passing information.
First let's add the factory pattern:
public interface IHelloFactory
{
    IHello CreateHelloService();
}
This factory pattern is also an interface and simply returns the original IHello interface. The implementation of this interface would inherit from MBRO and new up a HelloObject but all the clients of this service would continue to use the shared interfaces for programming rather than the implementation classes.
Now we expand HelloWorld to take the new HelloMessage custom data type, which is serializable.
public interface IHello
{
    HelloMessage HelloWorld(HelloMessage message);
}

[Serializable]
public class HelloMessage
{
    private string SenderName;
    private string Text;

    // property accessors omitted for brevity ...

    public HelloMessage(string sender, string text) { ... }
}
By sticking to this guidance we continue to deploy only the types necessary to share the contract. Shared interfaces (IHello and IHelloFactory) and serializable data types (HelloMessage) are the only pieces of the contract that consumers of our remote service ever need to program against. Even if you move to a different activation, hosting or communications technology, these same contract types will be reusable and are not .NET Remoting specific. Upgrading from this model to a future technology will be simple and mechanical since the programming interface will remain constant once an instance of the service is activated and accessible.
The new client code will allow for activation of the factory and then creation and consumption of the IHello service:
IHelloFactory factoryProxy = (IHelloFactory)Activator.GetObject(typeof(IHelloFactory), "tcp://localhost/HelloFactory.rem");
IHello helloProxy = factoryProxy.CreateHelloService();
HelloMessage returnMessage = helloProxy.HelloWorld(new HelloMessage("Me", "Hello"));
In Part 1 it was suggested to use only interfaces to program against your remoted object. Another consideration for enforcing the correct usage of your remote object is to limit access to the implementation except through the interface that you defined for it.
You can do this by marking the implementation internal so that it cannot be constructed directly except within the same assembly.
Consider the following changed code from Part 1:
Server code:

internal class HelloObject : MarshalByRefObject, IHello
{
    string IHello.HelloWorld(string name)
    {
        return "Hello, " + name;
    }
}

Shared code:

public interface IHello
{
    string HelloWorld(string name);
}
The same code on the client will allow for activation of an instance of the HelloObject class:

IHello proxy = (IHello)Activator.GetObject(typeof(IHello), "tcp://localhost/Hello.rem");

But this will prevent someone from using the type outside the implementation assembly by constructing the class directly:

// this will now fail outside the implementation assembly
HelloObject localObject = new HelloObject();
This may seem a bit draconian and unnecessary but it can enforce two good early development practices:
1. If you plan on "remoting" an object in the future, don't treat it as "local" during development. This can lead to performance and scalability issues that start with "But it works fine running locally on my dev box...".
2. Enforce your contract (i.e., the IHello interface) as the only way to program against your remote object. If you are planning on "remoting" an object then enforcing a common programming model, whether local or remote, can help ensure consistency and avoid issues when deploying your remote object later.
Even after following this advice you can access the implementation class locally without remoting it by using the Activator class:
IHello localObject = (IHello)Activator.CreateInstance(Type.GetType("HelloObject"));
This allows local developers to use the implementation class but keeps the programming interface common whether local or remote.
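The interface-plus-factory discipline described above is language-neutral. As a hedged illustration only (names invented, no remoting involved), here is the shape of the pattern in Python, where an abstract base class plays the role of the CLR interface and a module-level factory hides the implementation class:
```python
import abc

class IHello(abc.ABC):
    """The shared contract: the only type client code programs against."""
    @abc.abstractmethod
    def hello_world(self, name: str) -> str: ...

class _HelloObject(IHello):
    """'internal' implementation; the underscore discourages direct construction."""
    def hello_world(self, name: str) -> str:
        return "Hello, " + name

def create_hello_proxy() -> IHello:
    # the factory hides how the instance is obtained
    # (a local object today, a remote proxy tomorrow)
    return _HelloObject()

proxy = create_hello_proxy()
print(proxy.hello_world("Me"))  # Hello, Me
```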
One of the easiest ways to avoid locking yourself into .NET Remoting is to avoid exposing its most infamous type in your contract: MarshalByRefObject (MBRO). To marshal object references in .NET Remoting your type needs to inherit from MBRO but that doesn't mean your contract needs to expose types that inherit from MBRO.
Use CLR Interfaces for the remote objects in your contract.
This isn't really new advice. Ingo Rammer has suggested this in his book as well. Interestingly, those of you following the Indigo programming model will have seen a lot of interface usage so this pattern will carry forward nicely.
Let's take an example:
Server code:

public class HelloObject : MarshalByRefObject, IHello
{
    string IHello.HelloWorld(string name)
    {
        return "Hello, " + name;
    }
}

Shared code:

public interface IHello
{
    string HelloWorld(string name);
}

Now you can register HelloObject on the server side but get an instance of the remote object with the following call:

IHello proxy = (IHello)Activator.GetObject(typeof(IHello), "tcp://localhost/Hello.rem");
There are some major advantages to this approach:
- Only the interface needs to be shared, which allows for greater flexibility in versioning and deployment
- The interface (and hence contract) that has been established can be re-used later as part of the migration to Indigo
There is one disadvantage to this approach:
- Interception of new() is not supported
I will talk about how to get this support back in a future section of this series.
Q&A and updates---
Here are some good questions I got on this post with some of my answers...
Q: Why does exposing a type that inherits from MarshalByRefObject lock me into Remoting? What you said in your post is a good coding practice but not necessary.
A: .NET Remoting requires the sharing of types between client and server. Imagine that a new programming model came along that used a new infrastructure class for intercepting object creation and proxy generation (MBRO2). Keeping your interface separate from your implementation allows you to switch from MBRO to MBRO2 without changing the contract with the server or the type that you program against (i.e., the interface). You might have to change activation code but the programming experience against the remoted object can remain constant. Like you say, it isn't required but it is a good practice.
Q: MBRO belongs to the System namespace and not Remoting; any time you cross app domain boundaries to call your object you need its ObjRef, so it is not Remoting-specific to inherit from MBRO.
A: Crossing AppDomain boundaries within a process using MBRO *is* .NET Remoting. It isn't the same exact programming model for activation but it is using the .NET Remoting infrastructure.
So you've heard about SOA and Indigo and the future of distributed application development on .NET. You've even seen the long-running discussion of our guidance on Richard Turner's blog. You are now asking yourself:
But how can I use .NET Remoting today but be prepared for Indigo?
Hopefully, I can answer all of your questions in this multi-part series. The idea is to give some practical guidance with some code snippets to show you how to use .NET Remoting today without painting yourself into tomorrow's corner.
Now that I have your attention...
We are seriously considering deprecating the SoapFormatter in .NET Framework 2.0. It is the nexus of a whole host of serialization issues and implies a promise of interop that it does not and will not live up to. It also does not support generics. Additionally, those of you interested in the new Version Tolerant Serialization features in .NF 2.0 will note that it is not supported on the SoapFormatter. This means that types using VTS to introduce new fields would break when serializing between versions over the SoapFormatter. This is also true for framework types. So if you are using a framework type that adds fields in .NF 2.0 and you do not upgrade both apps using the SoapFormatter to .NF 2.0, then you could see serialization errors occurring simply by upgrading to .NF 2.0 on one side. Before you ask, no I don't have a list of framework types that might break in this scenario, but the question is whether requiring switching to the BinaryFormatter is unacceptable.
The obvious solution to this is to switch to the BinaryFormatter. It supports VTS, the serialization of generics and is one of the Cross-AppDomain formatters, where our guidance recommends the usage of .NET Remoting for new applications.
This is our stance as of Beta1 for .NET Framework 2.0.
So the question is: Who has to have the SoapFormatter in .NF 2.0 and why?
I'll be going to TechEd Europe to give a few talks. If you are interested in any of the following then add them to your schedule:
For anyone interested in talking about .NET Remoting specifically contact me through the TechEd site and I would be happy to meet with you.
I'll be in Amsterdam from Monday to Thursday.
Interested in Web Services features in Whidbey? So am I!
Yasser Shohoud, Software Legend, and I will be giving a talk at TechEd to introduce some of our favorite new Web Services features. The new talk (CTS201) is not even in the Session Catalog yet, but never fear, Yasser and I will be giving it. It is planned for the Friday 2:45pm slot, so mark your calendars!
We'll be covering Web Services, XmlSerialization and System.Net features to make your Whidbey Web Services faster, easier to write, more interoperable and more flexible. See you there!
Those of you scouring the list of talks at TechEd for in-depth .NET Remoting discussions will be disappointed. .NET Remoting will be touched on in the Connected Systems track (CTS300) by Richard Turner but that will be prescriptive guidance for the usage of .NET Remoting. There isn't a session dedicated to .NET Remoting at this TechEd.
But the good news is that I'll be at TechEd all week and would be happy to sit down with interested folks and talk through current and future .NET Remoting features!
Just post a response to this entry and I'll see if I can set up an impromptu meeting at TechEd for those interested in .NET Remoting to help you out.
As a first blog entry, I figure introductions are in order. I am Matt Tavis, PM at Microsoft working on the .NET Framework 2.0 (Whidbey) and Indigo. My Whidbey features areas are .NET Remoting, runtime serialization and xml serialization.
CCMSetup.exe is the manual installation program for the Advanced Client. SMSMan.exe is the manual installation program for the Legacy Client.
Capinst is used for Logon Script-initiated Client Installation. It requires a server locator point to determine which site the client is assigned to. Capinst.exe then starts CCMSetup.exe or SMSMan.exe, as appropriate.
CCMSetup is recommended because it:
Client.msi is a Windows Installer package containing the Advanced Client software. It can be used to distribute the Advanced Client through Group Policy, but it should not be run manually on the client. Clients installed using Client.msi will experience difficulties with upgrade and repair operations if the version of the MSI file used to install the client is not available when the client is repaired or patched. (Unlike Ccmsetup, Client.msi does not manage a local copy of the correct Client.msi for future repairs of the client.)
If you installed the client using Group Policy, using Advanced Client and Management Point Cleaner (CCMClean.exe) to remove the client is not recommended or supported. Group Policy installation creates registry keys that are not removed by Advanced Client and Management Point Cleaner, and these residual registry keys might complicate future reinstallation of the Advanced Client through Group Policy.
If you configure Group Policy to install the Advanced Client, configure the policy to Uninstall this application when it falls out of the scope of management. If you need to remove the Advanced Client from a computer, change the permissions on the policy so it does not apply to that computer. Removing the Advanced Client software through the software settings in the GPO removes all related registry keys and allows for future reinstallation through Group Policy.
Note Due to a Group Policy limitation, Group Policy cannot be used to apply Hotfixes to Advanced Client components.
For more information about managing Group Policy, see the Help and Support Center. For more information about CCMSetup, see Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment.
Determining the version and type of the SMS client software that is installed on a computer is often important during troubleshooting and for other purposes, such as verifying the success of client deployment. The following section shows how you can check the client version and type from the SMS Administrator console or on the client computer.
In the SMS Administrator console, you can determine the client version and type by viewing the properties of computers in collections and queries. The ClientType property is 0 if the client is a Legacy Client and 1 if the client is an Advanced Client. These are properties of the SMS_R_System class and the v_SMS_R_System view. You can use this information when creating queries and reports.
You can determine the client version by opening Control Panel, opening Systems Management, and then clicking the Components tab.
If you need to determine the client version by using a script or any other programmatic method, you can locate the client version in the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Client\Client Components\ SMS Client Base Components\Installation Properties|Installed Version
On the Advanced Client, this registry key value is set to 99.9.9999.9999. This value ensures that the Advanced Client software is never overwritten by the Legacy Client software. To determine the client's software version, you can check Windows Management Instrumentation (WMI). The client's software version is stored in the ClientVersion property of the SMS_Client class in the root\CCM namespace.
At a client, you can determine the client type by the SMS client installation directory. If a %Windir%\MS\SMS directory exists, then the client is a Legacy Client. If a %Windir%\System32\CCM\Clicomp directory exists, then the client is an Advanced Client. Also, Systems Management in Control Panel on the Advanced Client has an Actions tab, which the Legacy Client does not have.
Common client versions for SMS 2003 are listed below.
Systems Management Server 2003 (No SP)
2.50.2726.0018
Systems Management Server 2003 (SP 1)
2.50.3174.1018
Systems Management Server 2003 (SP 2)
2.50.4160.2000
No. Advanced Clients can run in Windows NT 4.0 domains. Active Directory is required for advanced security mode. Active Directory schema extensions are required for global roaming. Active Directory with schema extensions is also required if you want clients to automatically detect the server locator points and management points without generating WINS traffic.
For more information about Advanced Clients, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet.
When using CCMSETUP to install the Advanced Client, you can use the CCMDEBUGLOGGING switch to enable debug logging. The default for the CCMLOGLEVEL switch is 1. Changing that setting to 0 when using CCMSETUP enables verbose logging.
To enable debug logging after installation, create the following registry key: HKLM\SOFTWARE\Microsoft\CCM\Logging\debuglogging
To enable verbose logging after installation, change the following value to 0:
HKLM\Software\Microsoft\CCM\Logging\@Global\Loglevel
You might need to change the registry permissions on this key to change these values.
For more information about enabling Windows Installer logging, see article 223300, "How to Enable Windows Installer Logging," in the Microsoft Knowledge Base.
By default, the Advanced Client for 32-Bit clients is installed in the %Windir%\System32\CCM folder. You can change this default by running Ccmsetup.exe with the CCMINSTALLDIR installation property. Regardless of where the Advanced Client software is installed, the Ccmcore.dll file is always installed in the %Windir%\System32 folder. This is done so the SMS Advanced Client programs in Control Panel function properly.
A new Advanced Client that is not configured as a management point will store the client logs at %windir%\System32\CCM\Logs. A new Advanced Client that is configured as a management point will store the client log files in SMS_CCM\Logs. SMS 2003 Legacy Clients still store the client log files at %windir%\MS\SMS\Logs.
The SMS 2003 Advanced Client installation location differs on computers running supported 64-bit operating systems. In this case, Advanced Client installation files are always copied to %Windir%\CCMSetup before installation. SMS 2003 64-bit Advanced Client software is always installed to %Windir%\Syswow64\CCM. You cannot modify this installation location.
For more information about using the advanced client installer, see "Appendix I: Installing and Configuring SMS Clients" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet.
The SMS Advanced Client software is not automatically removed under any circumstances. Only a user with administrative credentials on the computer can remove the Advanced Client software. You can manually remove it in two ways. You can use Advanced Client and Management Point Cleaner (CCMClean.exe) from the SMS 2003 Toolkit, which is available for download on the SMS Web site.
You can also run
msiexec /x \\<management point>\smsclient\i386\client.msi.
Secondary sites do support Advanced Clients. However, Advanced Clients cannot be assigned to the secondary site. They are always assigned to the parent primary site, but can reside in the boundaries of the secondary site, taking advantage of any proxy management points and distribution points at the secondary site.
For more information about planning site boundaries and roaming boundaries, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment on Microsoft TechNet.
Management points must communicate with an SMS site database. Secondary sites do not have their own SMS site database; they use the site database at their parent primary site. The policy system for Advanced Clients is based on the primary site, and clients can get policy only when assigned to a primary site.
For more information about planning site boundaries and roaming boundaries, see "Appendix E: Designing Your SMS Sites and Hierarchy" in Scenarios and Procedures for Microsoft Systems Management Server 2003: Planning and Deployment.
Roaming is the ability to move a computer running the SMS Advanced Client from one IP subnet or Active Directory site to another. Roaming always involves an IP address change on the client. In SMS 2.0, clients moving to other sites might have been uninstalled and reinstalled into a new site, or they might have retrieved packages and contacted client access points across slow WAN links. Roaming was developed to help control how mobile computers use the network when communicating with SMS distribution points and management points.
For a Flash demonstration illustrating the concepts and processes of Advanced Client Roaming, see the "Systems Management Server 2003 Product Documentation" page on the SMS Web site.
For more information about roaming and roaming boundaries, see the "Configuration and Operation of Advanced Client Roaming" whitepaper on the Microsoft Download site.
When configuring roaming boundaries, the SMS administrator specifies whether a roaming boundary is a local roaming boundary or a remote roaming boundary. The terms local and remote are designed to be used by the SMS administrator as a way to label well-connected and not well-connected network segments, respectively. If the SMS administrator defines the roaming boundaries in this way, then the following definitions apply:
Local roaming boundary A roaming boundary in which the site distribution points are locally available to the Advanced Client and software packages are available to that client over a well-connected link. Advertisements sent to Advanced Clients specify whether the Advanced Client downloads the package source files from the locally available distribution point before running the program.
Remote roaming boundary A roaming boundary in which the site distribution points are not locally available to the Advanced Client. Advertisements sent to Advanced Clients specify whether the client downloads the software program from a remote distribution point before running it, runs the package from a remote distribution point, or does nothing and waits until a distribution point becomes available locally.
As a best practice, specify local roaming boundaries for the well-connected segments of an SMS site (such as over a LAN). Specify remote roaming boundaries for the slow or unreliable network links in your SMS site (such as RAS, a wireless network, a 56 Kbps dial-up connection, or a branch office that is not configured as a separate site).
Remote and local roaming boundaries are considered equivalent for automatic site assignment.
For more information about roaming and roaming boundaries, see the "Configuration and Operation of Advanced Client Roaming" whitepaper on the Microsoft Download site.
If Active Directory is not available, or if the Active Directory schema for SMS is not extended, Advanced Clients can roam only to the lower level sites of their assigned site. This is called regional roaming. In regional roaming, the Advanced Client can roam to lower level sites and still receive software packages from distribution points.
Global roaming allows the Advanced Client to roam to higher level sites, sibling sites, and sites in other branches of the SMS hierarchy, and still receive software packages from distribution points. Global roaming requires Active Directory and the SMS Active Directory schema extensions. Global roaming cannot be performed across Active Directory forests.
Simulate a simple client request to IIS on the management point. First, on the Start menu, click Run and type http://<management point name>/sms_mp/.sms_aut?mplist. If you see a blank screen instead of an error message, the request is successful. Next, on the Start menu, click Run and type http://<management point name>/sms_mp/.sms_aut?mpcert. If the request is successful, you will see a long list of numbers and letters. Finally, run the MPGetPolicy tool from the SMS Toolkit, available on the Microsoft Download site. If all of these tests fail, verify that your management point has been installed correctly. For more information about verifying the successful installation of a management point, see "Site Systems Frequently Asked Questions" on Microsoft TechNet.
Verify that the client is assigned to the site. By default, the wizard only pushes to clients assigned to the site.
Verify that you have created the appropriate accounts and they have access to all chosen client computers. Client Push Installation requires that you grant administrator rights and permissions to either the SMS Service Account (if the site is running in standard security mode) or Client Push Installation Accounts that you create in the Client Push Installation Properties dialog box in the SMS Administrator console.
To troubleshoot Client Push Installation problems during Advanced Client installation, review the Ccm.log file on the SMS site server, which is located in the SMS\Logs folder. On the client, review the Ccmsetup.log and Client.msi.log files, which are located in %Windir%\System32\Ccmsetup.
Also, to enable Client Push Installation for client computers running Windows XP SP 2, enable File and Print Sharing in the Windows Firewall (formerly known as Internet Connection Firewall, or ICF) configuration on the Windows XP client.
Site-wide Client Push Installation cannot install the SMS client on computers that are running Windows NT 4.0 and are discovered only by Active Directory discovery methods. As a workaround, you can create a collection and deploy the client to that collection rather than to the whole site.
For more information about using site-wide client push installation on Windows NT 4.0 computers, see the SMS 2003 Installation Release Notes. For information about how to configure Windows Firewall on Windows XP SP 2, search for "Windows Firewall" in Help and Support Center.
Yes. It is called capinst.log and is located in the logged-on user's temp directory (for example, D:\Documents and Settings\User1\Local Settings\Temp).
Note The Local Settings folder is marked as hidden by default.
No. The management point is not stored on the client in a registry setting that can be manipulated to assign the client to a specific management point. Clients use a dynamic process to locate their management point. This is an important feature of the Advanced Client that allows computers to roam to other sites.
Management point lookup occurs periodically, such as when the SMS Agent Host service starts. If you need to force the client to relocate the management point, you can:
If you have extended the Active Directory schema, then the client will use an LDAP query to determine the management point. If the schema has not been extended, the client will perform a WINS record lookup.
This is a known issue. If the upgrade was set to restart the Legacy Client after installation, the client generates status message 10022, which indicates the operation was successful, but a restart of the system is required for the operation to be complete. This message overrides the 10800 message that indicates a successful installation of the Advanced Client.
There are three reports that are currently affected by this condition:
These reports will show that the program completed successfully, but there is a restart pending. Assuming these clients have completed a restart, you can consider them fully installed. You can also add the client version to the detail in the report and use the version number to determine which clients have successfully installed the Advanced Client.
Note You may find a larger-than-expected number of computers reported as Succeeded in Advertisement status messages for a client being upgraded to the Advanced Client. If an Advanced Client successfully upgrades once, it will not rerun the upgrade program. If the Advanced Client package is advertised to more than one collection and the same client is a member of both collections, then that client will send the Program will not rerun status message. This status message moves the client from the Advanced Client installed count to the Succeeded count. If it is not possible to avoid the collection overlap, use client version reports to determine whether upgrades were successful.
Yes. At this time there are two known compatibility issues that require hotfixes, and five application compatibility issues caused by the secure configuration of the Windows Firewall (also known as Internet Connection Firewall, or ICF).

Hotfixes

Accessing SMS items in Control Panel. Because of restrictions imposed on DCOM by Windows XP SP2, users will not be able to access Run Advertised Programs or Program Download Monitor in Control Panel when using SMS 2003 (no service pack). Also, the Actions tab of Systems Management in Control Panel is not accessible. A hotfix is available to correct this problem; the hotfix is included in SMS 2003 SP1. For more information about this hotfix, see article 832862 in the Microsoft Knowledge Base. To successfully deploy this hotfix to the clients using SMS software distribution, you must verify that the countdown feature is disabled on the Advertised Programs Client Agent.

Downloading packages by using BITS. Windows XP SP2 interferes with the Advanced Client's ability to download packages by using BITS when using SMS 2003 (no service pack). Downloading policy by using BITS is not affected. This issue is fixed by applying a hotfix to the BITS-enabled distribution points. For more information about this hotfix, see article 832860 in the Microsoft Knowledge Base. The hotfix is included in SMS 2003 SP1.

Application compatibility issues and workarounds

When you install Windows XP SP2, the Windows Firewall is enabled by default. The default Windows Firewall settings will interfere with the operation of several SMS functions. To modify the programs and services permitted by Windows Firewall:
Remote Control. SMS clients running Windows XP SP2 cannot be remotely managed by using SMS Remote Tools. The recommended best practice is to use Remote Assistance on client computers that support it, such as Windows XP. Remote Assistance is unavailable when initiated from the SMS Administrator console.

Windows Event Viewer, System Monitor, and Windows Diagnostics from the SMS Administrator console. The SMS Administrator console cannot access Windows Event Viewer or System Monitor on computers running Windows XP SP2. To enable remote access to these features, enable File and Print Sharing in the Windows Firewall configuration on the Windows XP client. There is no workaround at this time to access Windows Diagnostics from the SMS Administrator console.

Client Push Installation. Client Push Installation fails on client computers running Windows XP SP2. To enable Client Push Installation, enable File and Print Sharing in the Windows Firewall configuration on the Windows XP client.

Queries. If you run the SMS Administrator console on a computer running Windows XP SP2, queries will fail the first time. To avoid this, add the SMS Administrator console and TCP port 135 to the list of programs and services on the Exceptions tab of Windows Firewall.

Advanced users can configure Windows Firewall by using the netsh.exe command-line tool. For more information about this tool, search for "Configuring Windows Firewall from the command line" in Help and Support Center. Network administrators can also use Group Policy to configure Windows Firewall settings. For a complete list of Group Policy options, see "Deploying Internet Connection Firewall Settings for Microsoft Windows XP with Service Pack 2" at the Microsoft Download Center.
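The netsh.exe approach mentioned above might look roughly like the following (a sketch only, assuming the Windows XP SP2 "netsh firewall" context; verify the exact syntax on your systems, for example with "netsh firewall /?", before deploying):

```bat
rem Enable the File and Print Sharing exception (used by Client Push
rem Installation and by remote Event Viewer / System Monitor access)
netsh firewall set service FILEANDPRINT ENABLE

rem Open TCP port 135 (DCOM) so SMS Administrator console queries succeed
netsh firewall add portopening TCP 135 "DCOM for SMS Administrator console" ENABLE
```

Group Policy remains the recommended way to apply these settings across many clients.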
http://technet.microsoft.com/en-us/cc988013.aspx
There are many times when you will want to navigate from one page to another and pass values from the first page to the second. For example, you might have a page that prompts users for a name and password. When users submit the form, you want to call another page that authenticates the user.
There are a variety of ways to share information between pages.
To create shareable values on the source page
The following example shows how you can declare a property called Property1 and sets its value to the value of text box on the page:
' Visual Basic
Public ReadOnly Property Property1() As String
Get
Return TextBox1.Text
End Get
End Property
// C#
public string Property1
{
get
{
return TextBox1.Text;
}
}
The following example shows how you might call a page called WebForm2 (in the same project) from an event handler:
' Visual Basic
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Server.Transfer("Webform2.aspx")
End Sub
// C#
private void Button1_Click(object sender, System.EventArgs e)
{
Server.Transfer("Webform2.aspx");
}
To get the property values of the source page from the called page, declare a variable typed as the source page class. You then assign to it the object that handled the original request (available from Context.Handler as an IHttpHandler), cast to the source page type.
To read property values from the source page in the called page
The following example shows how to declare a variable called sourcepage that is of the type WebForm1:
' Visual Basic
' Put immediately after the Inherits statements
' at the top of the file
Public sourcepage As WebForm1
// C#
// Put immediately after the opening brace of the class
public class WebForm3 : System.Web.UI.Page
{
public WebForm1 sourcepage;
// etc.
Note You should only perform this logic the first time the page runs (that is, when the page is first called from the source page).
Note Be sure to save the property values (for example, in view state) if you want to use them in page processing other than the first page initialization. For details, see Introduction to Web Forms State Management.
The complete Page_Load handler might look like this:
' Visual Basic
Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
If Not Page.IsPostBack Then
sourcepage = CType(Context.Handler, WebForm1)
Label1.Text = sourcepage.Property1
End If
End Sub
// C#
private void Page_Load(object sender, System.EventArgs e)
{
if (!IsPostBack){
WebForm1 sourcepage = (WebForm1) Context.Handler;
Label1.Text = sourcepage.Property1;
}
}
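The same hand-off idea, stripped of its ASP.NET specifics, can be sketched in a few lines of Python (illustrative only; the class and property names here are invented for the example): the target page reads a typed property off the object that handled the original request, rather than re-parsing the user's input.

```python
# Sketch (not ASP.NET): a target "page" reads a read-only property off the
# object that handled the original request, mirroring Context.Handler.
class SourcePage:
    def __init__(self, textbox_text):
        self._text = textbox_text

    @property
    def property1(self):
        # read-only property, like the VB/C# Property1 shown earlier
        return self._text


class TargetPage:
    def load(self, handler):
        # equivalent of: sourcepage = (WebForm1) Context.Handler
        source = handler if isinstance(handler, SourcePage) else None
        return source.property1 if source is not None else None


print(TargetPage().load(SourcePage("hello")))  # hello
```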
Web Forms State Management
http://msdn.microsoft.com/en-us/library/aa713401(VS.71).aspx
Quite a few years ago, I came to the practical conclusion that I don't know everything there is to know about the world. Sure, it's presumptuous to even start from a place that requires such a conclusion. But such arrogance is a failing of youth, and I know better now. In fact, this reality is only magnified in the Internet age, as I find it virtually impossible to keep up-to-date with the non-stop flow of information, ideas, and opinions available on the web. Even within my modest scope of interests, I have a hard time staying current with the latest developments in Visual Basic, Visual C# and the .NET Framework.
So, over the course of my next few entries I'm going to explore different approaches to using something with which you're likely already familiar: RSS (a/k/a Really Simple Syndication), a technology that greatly eases the burden of grappling with information overload.
As a quick primer, Wikipedia describes RSS as "short descriptions of web content together with links to the full versions of the content. This information is delivered as an XML file called an RSS feed, webfeed, RSS stream, or RSS channel. In addition to facilitating syndication, RSS allows a website's frequent readers to track updates on the site using a news aggregator." (For the full entry, see RSS on Wikipedia).
To get started, we'll look at a simple RSS feed reader you can build using Visual Web Developer 2005 Express Edition. I'm going to build a control that fetches an RSS feed from a site of interest and organizes that content for display on a web page. Of course, you could also build an RSS feed reader as a Windows application, or take advantage of any one of a number of free RSS client applications available (e.g., RSS Reader or SharpReader) if your aim is to aggregate content for your own personal consumption. Using syndicated content in a Web application has a different purpose, however, as it allows you to extend the content you create yourself and enrich the user experience of visitors to your site.
Before getting into the code, let's first take a look at a snippet of a typical RSS 2.0 file. In this example, the syndication provider is Microsoft's MSDN Web site, and I'm using the feed devoted to Visual Basic content.
<rss xmlns:msdn="..." version="2.0">
  <channel>
    <title>MSDN: Visual Basic</title>
    <link></link>
    <description>Recently Published Visual Basic Content</description>
    <language>en-us</language>
    <pubDate>Thu, 30 Jun 2005 13:01:02 GMT</pubDate>
    <lastBuildDate>Thu, 30 Jun 2005 13:01:02 GMT</lastBuildDate>
    <generator>MSDN RSS Service 1.1.0.0</generator>
    <ttl>1440</ttl>
    <item>
      <title>June CTP of Visual Studio 2005 Available to MSDN Subscribers</title>
      <description>The latest Community Technical Preview of Visual Studio 2005 is now available for download to MSDN subscribers.</description>
      <link></link>
      <category domain="msdndomain:ContentType">Announcement</category>
      <category domain="msdndomain:Subject">.NET development</category>
      <msdn:headlineImage></msdn:headlineImage>
      <msdn:contentType>Announcement</msdn:contentType>
      <msdn:simpleDate>Jun 27</msdn:simpleDate>
      <guid isPermaLink="false">Titan_1106</guid>
      <pubDate>Tue, 28 Jun 2005 02:00:13 GMT</pubDate>
    </item>
    <!-- more items follow -->
  </channel>
</rss>
A typical feed file is loaded with information, some of which is not included uniformly by all syndication providers (MSDN's headline image, for example, is not standard). At the top of the XML structure is a single <channel> node that includes a title and description you can use to introduce the list of content items. The <channel> node contains many child <item> nodes representing each article of content (including the title of the article, a description, publication date and the link to the full article).
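The article's code is VB/C#, but the channel/item hierarchy itself is easy to see in any language. Here is a minimal, self-contained sketch in Python (not from the article; the feed content below is a made-up stand-in) that pulls the channel title and the per-item titles and links out of an RSS 2.0 document:

```python
# Sketch: parsing the channel/item structure of an RSS 2.0 feed with the
# Python standard library, to make the hierarchy described above concrete.
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0">
  <channel>
    <title>MSDN: Visual Basic</title>
    <description>Recently Published Visual Basic Content</description>
    <item>
      <title>First article</title>
      <link>http://example.com/1</link>
      <description>Summary of the first article.</description>
    </item>
    <item>
      <title>Second article</title>
      <link>http://example.com/2</link>
      <description>Summary of the second article.</description>
    </item>
  </channel>
</rss>"""


def read_feed(xml_text):
    """Return (channel_title, [(item_title, link), ...]) from an RSS 2.0 doc."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(i.findtext("title"), i.findtext("link"))
             for i in channel.findall("item")]
    return channel.findtext("title"), items


title, items = read_feed(RSS)
print(title)       # MSDN: Visual Basic
print(len(items))  # 2
```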
After creating a new Web Site in Visual Web Developer 2005 Express Edition, I added a Web User Control that I'll use to display this information.
One of the features of ASP.NET 2.0 is that it supports both the code-behind model familiar to Visual Studio 2003 developers, and (something new) in-line code which allows you to write all your code within <script> tags in the .aspx or .ascx files. While using the in-line code approach doesn't make any difference in how the ASP.NET pages perform, I prefer the code-behind model and will use it throughout this example. You'll also notice that I'm using Visual Basic to write the code for this application.
Working in the designer, I added a Repeater control to the user control. The Repeater is a data-bound control for displaying data using a custom layout, which is ideal for showing a list of content items from an RSS feed. You can use the smart tag menu associated with the control to set its data source at design time. However, for this application I'll be setting the data source at runtime using the RSS feed.
Switching to the code-behind file for the .ascx file (RSSList.ascx.vb), I added Imports statements for the namespaces containing the classes necessary to acquire the RSS feed and store it to a local resource that will bind to the Repeater.
Visual C#
using System.Data;using System.Net;
Visual Basic
Imports System.NetImports System.Data
In addition to using the event handler for the control's Page Load event, I also added a private helper function called RefreshFeed. This gets the RSS data and returns it as a DataSet to the Page Load event handler. In the RefreshFeed function, the first line of code creates an instance of an HttpWebRequest object using the shared Create function of the WebRequest class.
HttpWebRequest rssFeed = (HttpWebRequest)WebRequest.Create("");

Dim rssFeed As HttpWebRequest = _
    DirectCast(WebRequest.Create(""), HttpWebRequest)
As an input argument to the Create function, the code passes the target URL of the RSS feed. The function returns an HttpWebRequest object, which is a special type of WebRequest object that supports additional properties and methods for interacting with servers using HTTP. In this case, my needs are very simple for HttpWebRequest; the code first gets the response from the server (GetResponse) and then, because the response is in XML format, the response stream (GetResponseStream) can be loaded directly into a DataSet using the ReadXml method overload that accepts a Stream as an input argument.
DataSet rssData = new DataSet();
rssData.ReadXml(rssFeed.GetResponse().GetResponseStream());

Dim rssData As DataSet = New DataSet()
rssData.ReadXml(rssFeed.GetResponse().GetResponseStream())
The ReadXml method automatically infers the schema for the XML data. That means the <channel> and <item> nodes of the source XML data are represented in the DataSet as separate DataTables. In the Page Load event handler, after calling the RefreshFeed method to return a DataSet containing the RSS feed data, the code accesses the channel and item data in different ways. The second DataTable in the DataSet contains the channel information. To display the feed title and description, the code copies the first and only row from the DataTable into an Object array using the ItemArray property of the Row. Then, because I want to be sure I locate the correct data column, the code assigns the ordinal position of the title and description columns, respectively, to local variables of type Integer. Using these values, the code calls the GetValue method of the Object array to store the value associated with each column in the Friend fields defined for the class (I'll use these values in the Repeater control).
object[] channelItems = rssData.Tables[1].Rows[0].ItemArray;
int titleColumn = rssData.Tables[1].Columns["title"].Ordinal;
int descriptionColumn = rssData.Tables[1].Columns["description"].Ordinal;
Title = channelItems.GetValue(titleColumn).ToString();
Description = channelItems.GetValue(descriptionColumn).ToString();

Dim channelItems As Object() = rssData.Tables(1).Rows(0).ItemArray
Dim titleColumn As Integer = rssData.Tables(1).Columns("title").Ordinal
Dim descriptionColumn As Integer = rssData.Tables(1).Columns("description").Ordinal
Title = channelItems.GetValue(titleColumn).ToString()
Description = channelItems.GetValue(descriptionColumn).ToString()
Next, the code sets the DataSource property of the Repeater control to the DataTable in the DataSet holding the item content from the RSS feed. Finally, the code calls the DataBind method of the Repeater to bind the data source to the control.
Repeater1.DataSource = rssData.Tables(2)
Repeater1.DataBind()
In the source view of the RssFeed.ascx control, I added templates for the Repeater control to display the data. In the <HeaderTemplate> the code begins a table with a header element that displays the channel title and description.
<HeaderTemplate>
  <table border="0">
    <thead>
      <tr style="font-weight: bold;">
        <td><%#Me.Title%></td>
      </tr>
      <tr style="font-style: italic;">
        <td><%#Me.Description%></td>
      </tr>
    </thead>
</HeaderTemplate>
Similarly, in the <ItemTemplate> the code displays the title of a content item with its associated link, as well as the description of the item.
<ItemTemplate>
  <tr bgcolor="LightBlue">
    <td>
      <a target="article" style="text-decoration: none; color: black;"
         href=<%# DataBinder.Eval(Container.DataItem, "link") %>>
        <%# DataBinder.Eval(Container.DataItem, "title") %>
      </a>
    </td>
  </tr>
  <tr bgcolor="Ivory">
    <td style="color: CornFlowerBlue;">
      <%# DataBinder.Eval(Container.DataItem, "description") %>
    </td>
  </tr>
</ItemTemplate>
As a final and very necessary step, I added an OutputCache directive at the top of the .ascx page to cache the output of the control for one hour.
<%@ OutputCache Duration="3600" VaryByParam="None" %>
In future articles I'll dig deeper into the possibilities for using syndicated RSS feeds in a Web application. Until then, I encourage you to download the code and try it out for yourself.
Hi, very simple and useful for me.
But how can I limit the return of RSS results. I want it to display 5 rss entries. Thanks
Hi,
Thanks alot. I enjoyed from reading your usfull article.
Be succesfull
Nguyen Vu - just simply return only the amount of rows you need i.e. .Rows[4]
Can i ask a DELPHI version it would be a great help.
Could u be ANY more vague about what the hell is going on with this code here? this was a waste of my ti,e...
erm not sure if you have checked this recently but its seem to be broken code, i 'ev been trying to create a simple rss feed in asp.net for a while its taking the piss...
took less time to do it in php what gives?
Thanks for the concise example. Great starting point.
It is a greate article for .net user.
But it needs to open any rss web site not which contains only .xml extensions.
eg. when i wite the program above was not work.
If author gives such actions then it will become very great.
can you please tell me, when did your next artical will publish. Because it is very imp to me for further devlopment of my own rss reader.
thenks..
Hey, why you have to make a story out of it, can you go straight to the point?, it is really boring to read the first crap paragraph. When somebody comes to this page is to get the technical part as easy as you can provide it. No offense. And thanks for providing this control.
Nice artilce, I was looking for it :)
#Nguyen Vu
a easy way to deal rss feed by using rsstoolkit.dll
I have another question:
if I want to show contentType of each item, how can I write the binding sentence,
<%# DataBinder.EvalContainer.DataItem, "msdn:contentType") %> ?
but it doesn't work.
Does anyone else have some suggestions? Thanks.
Thanks for the good article. It realy helped.
Nice article!
I want to use yahoo weather RSS feeds .can you tell me how can I do That.
Reading some of these comments made me chuckle. People are expecting everything to be spoon-fed and immediately configured to work with their environments. FFS learn the framework instead of expecting someone else to do it for you.
If you dont know how to debug an app or how a dataset works, maybe you should try and master the basics first. Ya im looking at you guys: arjun, kyle... well actually most of the people that responded :P
You provided a great foundation for an RSS Reader. Good job and Thank you!
Hi, I'm trying to get the RSS Feed inorder to display the content on my page, but I'm getting the below error:
The remote name could not be resolved: 'developers.sun.com'
can anyone tell me how to resolve this problem.
Thanks in advance,
Kiran.
@Kiran: Are you using a proper RSS feed?
This was easy? If this is easy, then I think this is way beyond my learning ability. I'm just gonna go uninstall VB and go outside and chase cars or something...
@Jason: Chasings cars, while fun, isn't a good idea.
What did you have issues with? I may be able to help out.
Really nice article.. It is very helpful.
The people writing rude comments shouldn't take their frustrations out of the author. First of all, if you don't like the article, than you don't have to read it, the author is doing you a service by writing it at all and isn't required to do so. If you don't understand or if it isn't what you are looking for, go find another article or write one yourself.
Why can't we just use Windows RSS Platform?
http://blogs.msdn.com/coding4fun/archive/2006/10/31/912335.aspx
Use Test-Driven Development with mock objects to design object oriented code in terms of roles and responsibilities, not categorization of objects into class hierarchies.
Isaiah Perumalla
This article reviews the Prism project developed by the Microsoft patterns & practices group and demonstrates how to apply it to composite Web applications using Silverlight.
Shawn Wildermuth ...
We introduce you to “Oslo” and demonstrate how MSchema and MGraph enable you to build metadata-driven apps. We’ll define types and values in “M” and deploy them to the repository.
Chris Sells
MSDN Magazine February.
MSDN Magazine December 2007
Paul DiLascia
GET /foo.exe HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;
Q312461; .NET CLR 1.0.3705)
Host: localhost
Connection: Keep-Alive
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1
Date: Fri, 01 Feb 2002 02:11:29 GMT
Content-Type: application/octet-stream
Accept-Ranges: bytes
Last-Modified: Fri, 01 Feb 2002 01:41:16 GMT
ETag: "50aae089c1aac11:916"
Content-Length: 45056
<<stream of bytes from foo.exe>>
GET /foo.exe HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
If-Modified-Since: Fri, 01 Feb 2002 01:41:16 GMT
If-None-Match: "50aae089c1aac11:916"
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;
Q312461; .NET CLR 1.0.3705)
Host: localhost
Connection: Keep-Alive
HTTP/1.1 304 Not Modified
Server: Microsoft-IIS/5.1
Date: Fri, 01 Feb 2002 02:42:03 GMT
ETag: "a0fa92bc8aac11:916"
Content-Length: 0
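The two exchanges above show HTTP's conditional GET: the second request carries the validators from the first response (If-Modified-Since and If-None-Match), and the server answers 304 Not Modified with an empty body when nothing changed. A small sketch of the server-side decision (simplified; per the HTTP specification, If-None-Match takes precedence over If-Modified-Since when both are present):

```python
# Sketch of the server-side conditional-GET decision illustrated by the
# traces above. Real servers implement the full RFC precedence rules.
def respond(request_headers, current_etag, last_modified):
    """Return 304 if the client's cached copy is still valid, else 200."""
    if "If-None-Match" in request_headers:
        # An entity-tag match wins over any date comparison.
        return 304 if request_headers["If-None-Match"] == current_etag else 200
    if request_headers.get("If-Modified-Since") == last_modified:
        return 304
    return 200


etag = '"50aae089c1aac11:916"'
modified = "Fri, 01 Feb 2002 01:41:16 GMT"

print(respond({}, etag, modified))  # 200 (first, unconditional request)
print(respond({"If-None-Match": etag,
               "If-Modified-Since": modified}, etag, modified))  # 304
```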
// Strong naming is required when versioning
[assembly: AssemblyKeyFileAttribute("wahoo.key")]
[assembly: AssemblyVersionAttribute("1.2.3.4")]
string appbase = AppDomain.CurrentDomain.BaseDirectory;
private void InitializeComponent() {
System.Resources.ResourceManager resources =
new System.Resources.ResourceManager(typeof(MainForm));
•••
this.game.BackgroundImage = ((System.Drawing.Bitmap)
(resources.GetObject("game.BackgroundImage")));
•••
}
public MainForm() {
// Let the Designer-generated code run
InitializeComponent();
// Init culture-neutral properties
this.game.BackgroundImage = new Bitmap(typeof(MainForm),
"sblogo.gif"); // WARNING: Case sensitive
}
[assembly: NeutralResourcesLanguageAttribute("en-US")]
<configuration></configuration>
using System.Security;
using System.Security.Policy;
SecurityZone zone =
Zone.CreateFromUrl(appbase).SecurityZone;
[assembly: AllowPartiallyTrustedCallersAttribute]
C:\> ieexec.exe http://.../wahoo/deploy/wahoo.exe 3 3 00
http://msdn.microsoft.com/en-us/magazine/cc301790.aspx
Microsoft Corporation
Abstract
This white paper presents scenarios for creating highly available file and printer shares with the Cluster service feature in the Microsoft® Windows® 2000 operating system. Although this document by no means presents a comprehensive list of every possible scenario, it tries to provide enough examples so that administrators can implement Cluster service in the way that best serves their users' needs.
Introduction
Basic Terms and Concepts
Printer Shares
Conclusion
Related Links
Appendix A: Cluster Setup
Appendix B: Creating a Virtual Server
Appendix C: Creating a Print Spooler Resource with the Cluster Application Wizard
Appendix D: Configuring Failback
Print service availability is often considered critical in computer network deployments. Consequently, enabling administrators to create highly available print shares is one of the key goals of the Cluster service feature in the Microsoft® Windows® 2000 Advanced Server operating system. Because clustering is new to Windows, administrators must move away from a machine-centric view of Windows.
This document covers the basic information you need to know to implement Windows 2000 Cluster service correctly and describes the shared-nothing architecture of Cluster service, group structure, and virtual servers. This document also describes in detail how to create print spooler resources, and Appendix A provides a detailed illustration of the cluster setup that is referred to throughout this document.
This section covers the basic terms and concepts that are used throughout this document.
If you're new to clustering in Windows, you may become confused by the different naming conventions that are used. During beta testing of the Cluster service, the product was code-named Wolfpack. The product was released with the name Microsoft Cluster service and is sometimes referred to as MSCS. In addition, some Knowledge Base articles refer to the technology as Cluster Server or Server Clustering. By any name, the technology is the same.
Note: Do not confuse this type of clustering with Network Load Balancing or Component Load Balancing.
Cluster service uses a shared-nothing model of clustering. This model does not allow cluster members to access resources on other members of the cluster, as a shared-everything model would. It also avoids the overhead and inherent scalability limits that would be imposed by a shared-everything model. In terms of file and printer shares, this means that a file share is owned by only one node. To split file share load among multiple servers, the file shares themselves must be split into different virtual servers.
A resource provides a service to a client, such as a file share, an Internet Protocol (IP) address, or a network name. Resources are the smallest management unit of Cluster service. Resources can depend on other resources and are organized into groups.
A dependency is a relationship between resources similar to service dependencies. Dependencies additionally define the order that resources are brought online and offline and are typically represented as dependency trees. Figure 1 illustrates a standard dependency tree for a file share resource. This illustration shows that should an administrator attempt to take the disk resource offline, the file share resource also goes offline, followed by the disk resource. The network name resource does not go to an offline state, because there is no direct dependency on the disk for the network name. Conversely, if we then try to bring the file share resource online when the initial state of all the resources is offline, the disk and IP address are brought online, followed by the network name, and then the file share resource. Note that there is no explicit dependency between the file share resource and the IP address. This is because dependencies are said to be transitive—a definition that is not technically correct but provides a good description of the empirical behavior of a dependency.
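The bring-online and take-offline ordering described above is essentially a walk over the dependency tree. The following is a rough conceptual model only; the classes, function names, and resource objects here are invented for illustration and are not Cluster service code:

```python
# Conceptual model of cluster resource dependency ordering (illustration
# only; this is not how Cluster service is actually implemented).

class Resource:
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)
        self.online = False

def bring_online(resource, log):
    """Bring a resource's dependencies online first, then the resource."""
    if resource.online:
        return
    for dep in resource.depends_on:
        bring_online(dep, log)
    resource.online = True
    log.append(resource.name)

def take_offline(resource, all_resources, log):
    """Take a resource's dependents offline first, then the resource."""
    if not resource.online:
        return
    for other in all_resources:
        if resource in other.depends_on:
            take_offline(other, all_resources, log)
    resource.online = False
    log.append(resource.name)

# The file share dependency tree from Figure 1.
ip = Resource("IP address")
name = Resource("Network name", [ip])
disk = Resource("Disk")
share = Resource("File share", [name, disk])
resources = [ip, name, disk, share]

log1 = []
bring_online(share, log1)
print(log1)  # ['IP address', 'Network name', 'Disk', 'File share']

log2 = []
take_offline(disk, resources, log2)
print(log2)  # ['File share', 'Disk'] (the network name stays online)
```

Taking the disk offline first takes its dependent file share offline, while the network name, which has no dependency on the disk, stays online, matching the behavior described for Figure 1. The relative order among independent resources (for example, the disk versus the IP address) is not significant.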
Given the above information, it would appear that a dependency is strictly a one-way relationship. This is not the case, however—a point which is best illustrated here by the print spooler resource. When adding a printer to a virtual server, it is absolutely necessary to run the Cluster Application Wizard from the network name on which the print spooler has a dependency. Applications, network name resources, and generic services also exhibit some similar behavior.
Note: For more information on dependencies, see the following Microsoft Knowledge Base articles:
171791 Creating Dependencies in Microsoft Cluster Server
198893 Effects of Checking "Use Network Name for Computer Name" in MSCS
195462 WINS Registration and IP Address Behavior for MSCS 1.0
A group is a logical collection of resources that must all run on the same node to function properly. For example, if the Information Store and Message Transfer Agent in Microsoft Exchange Server were to run on different servers, this would obviously generate some problems. This information also implies one very important fact: A group is the unit of failover in the cluster. Entire groups of resources move from one node to the other, not individual resources.
The easiest way to build groups is around storage. Most applications and services require storage to function, as in the Exchange example. Because dependencies can be created only within the same group, every application or resource that uses the same disk must be in the same group as the disk resource. Groups are also generally organized as virtual servers, though more than one virtual server can exist in a group.
Failover is the process by which a group moves from one server to another. Failover can occur for several reasons:
The administrator has moved a group to a new server.
A resource in the group has exceeded its failure threshold.
The group is configured for failback, and the original owner of the group has returned to service.
The third reason provides the definition for failback—that is, the group fails back to a preferred owner if so configured. Preferred owners are configured as a group property. Failback can be configured to occur either immediately or within a specific time period.
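The failback policy above can be sketched as a small decision function. This is a hypothetical model with invented names and parameters; the real policy is configured through group properties in Cluster Administrator, not through an API like this:

```python
# Minimal model of the failback decision described above (illustration
# only; the function and its parameters are invented, not a real API).

def should_fail_back(preferred_owner_online, current_hour,
                     allow_failback=True, window=(23, 0)):
    """Return True if a group should fail back to its preferred owner.

    `window` models the configured "failback between" pair of hours;
    (23, 0) allows failback only during the 23:00-24:00 hour.
    """
    if not (allow_failback and preferred_owner_online):
        return False
    start, end = window
    if start <= end:
        return start <= current_hour < end
    # The window wraps past midnight, e.g. (23, 0) or (22, 2).
    return current_hour >= start or current_hour < end

# Failback between 23 and 0 hours, as configured in the scenarios below.
print(should_fail_back(True, 23))   # True
print(should_fail_back(True, 12))   # False
print(should_fail_back(False, 23))  # False
```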
Note: For more information on failover and failback, consult the following Knowledge Base articles:
197047 Failover/Failback Policies on Microsoft Cluster Server
171277 Microsoft Cluster Server Cluster Resource Failover Time
A virtual server is a combination of two resources, an IP address resource and a network name resource, that work together to present a namespace to a client. Figure 2 illustrates the resources and dependency tree for a virtual server. Note that there is no reason that a group cannot contain multiple virtual servers, but this means that all virtual servers in the group can be owned by only one node at a time. Organizing groups into virtual servers provides finer granularity and better scalability, especially in scenarios where additional nodes are to be added later. Figure 3 illustrates the namespaces that are presented to a client by the cluster nodes.
Note: Virtual servers are based on NetBIOS and thus have certain limitations.
You can construct a virtual server in either of two ways. Although you can create a virtual server manually, the easiest method is to use the Cluster Application Wizard. The Cluster Application Wizard creates a virtual server with the option to configure a cluster server application. Creating a virtual server manually requires that an IP address and network name resource be configured in the appropriate group. For a detailed scenario that shows both procedures, see Appendix B.
Cluster service makes it possible to host multiple spooler resources. This allows the service to provide high availability for printer shares, along with the ability to distribute load among the cluster nodes. The print spooler resource is limited to one spooler resource per group. It has required dependencies for a network name resource (for consistent access to the spooler) and a physical disk (to store spooler files).
After a spooler resource is created, printers must be added to it. In Windows 2000, the job is greatly simplified, because the spooler resource maintains information about clustered printer ports in the cluster configuration database. This eliminates the need to install the ports twice, once on each node. Drivers, however, must be installed on both nodes, because printer drivers are copied to the PRINT$ share of the remote server.
Clustered print spoolers support only standard port monitors and Line Printer Remote (LPR) printers. LPR ports do not support bidirectional printing. Currently, no other ports are supported.
When a group containing a print spooler resource fails over to another node, the document that is currently being spooled to the printer is restarted from the other node after the failure. When you move a print spooler resource or take it offline, Cluster service waits until all documents are finished spooling or until the configured wait time has elapsed. Documents that are spooling from an application to a print spooler resource are discarded and must be re-spooled to the resource (or reprinted) if the group containing the resource fails over before the application has finished spooling.
Printers hosted by a cluster node are added to Active Directory by the spooler service.
The print spooler resource has required dependencies on a network name and a storage class resource. Figure 2 shows the dependency tree for a spooler resource.
On the resource parameters page of the New Resource Wizard, the only parameters provided are the location of the spool folder and the duration of the job completion timeout. The wizard, based on dependencies, provides these parameters automatically, so there is usually no need to modify them.
Creating the virtual server and the spooler resource is only half of the job. The spooler resource is useless without printers. To add a printer, you must first note the owning node of the virtual servers, because the Add Printers Wizard copies the drivers to the owning node only. The driver must then be manually installed on the other node.
The Add Printers Wizard must be started from the virtual server—specifically, the network name on which the print spooler depends. After the printer is successfully installed, you must add the driver to the other node. Although several different methods exist to add the drivers to the other node, the command line shown in Figure 6 below starts the Add Printers Wizard directly. You might find it useful to create a shortcut to this command line on a file and print cluster.
For a detailed scenario, see Appendix C.
Adding additional drivers that are not Windows 2000 drivers is relatively uncomplicated. After the initial drivers are installed, you can install additional drivers through the printer user interface (UI). The procedure is as follows:
Connect to the virtual server.
Open the Printers folder.
Right-click the printer to add drivers to, and then click Properties.
Click the Sharing tab, and then click the Additional drivers option.
Once the driver has been added, return to the Printers folder.
Fail the group to the other node.
Repeat steps 3 through 6.
Reskit.com has multiple printer servers it would like to consolidate to one cluster, SEA-NA-CLUS-02. There are approximately 200 printers, and performance is of prime importance.
Implementation
To achieve better performance, the printers are split roughly evenly between two virtual servers, SEA-NA-PRINT-01 and SEA-NA-PRINT-02. Each virtual server is configured in its own resource group with a print spooler resource. Half the printers are installed on one virtual server, with the remaining half allocated to the other virtual server. Preferred owners are configured for each group, with each node assigned as the preferred owner of one group. Failback is configured to occur between 23 and 0 hours.
Procedural Overview
Create virtual servers SEA-NA-PRINT-01 and SEA-NA-PRINT-02, adding a print spooler in each virtual server.
Add printers to the first node.
Add printers to the second node.
Configure the failback policy for each group, as follows for SEA-NA-PRINT-01:
Right-click the group, and then click Properties.
On the General tab, click Modify to add SEA-NA-CLN-01 as the preferred owner.
In the Available Nodes box, select SEA-NA-CLN-01, and then click the right arrow button to move the node to the Preferred Owners box. Click OK.
Click the Failback tab, and then click Allow failback.
Click Failback between, and then enter 23 and 0 hours. Click OK.
Repeat steps a through e for SEA-NA-PRINT-02 and SEA-NA-CLN-02, respectively.
Move groups to their preferred owners.
For a detailed scenario, see Appendix D.
Because of the importance of print services to the end users on a computer network, Windows 2000 Advanced Server provides Cluster service, a powerful feature for improving the reliability and performance of print services. However, you must deploy Cluster service properly to provide these benefits to your users.
Since clustering is new to Windows, you may be required to look at familiar tasks in new ways. For example, although clustered file shares behave exactly like normal file shares, creating and administering them is a bit different, because they run on multiple machines.
Moreover, you must create clustered file shares using the Cluster Administrator in Administrative Tools, and they can involve more complicated administration of permissions. Similarly, the use of clustering for printer services requires that you apply dependencies for a network name resource and a physical disk.
For more information about Windows 2000 Cluster service, see the following resources:
Step-by-Step Guide to Installing Cluster Service
Server Cluster Glossary
Windows 2000 Advanced Server, Clustering Microsoft ITG Infrastructure Services
Cluster Management and Administration
Windows Clustering Technologies
See also the following Knowledge Base articles:
259267 Microsoft Cluster Service Installation Resources
258750 Recommended Private "Heartbeat" Configuration on Cluster Server
174812 Effects of Using Autodetect Setting on Cluster NIC
278710 No Global Groups are Available Creating File-Share Permissions
The heartbeat network adapters were set to a fixed speed and configured according to information from the following Microsoft Knowledge Base articles:
258750 Recommended Private "Heartbeat" Configuration on Cluster Server
174812 Effects of Using Autodetect Setting on Cluster NIC
Warning: At no time should both systems be booted with access to the shared resources, unless at least one of them has Cluster service. Install the first node with the second node down, and then boot the second node and install Cluster service. FAILURE TO HEED THIS WARNING COULD RESULT IN FILE SYSTEM CORRUPTION THAT MAY BE UNDETECTABLE UNTIL A LATER DATE.
For additional cluster installation resources, see also the Knowledge Base article 259267 Microsoft Cluster Service Installation Resources.
Using the Cluster Application Wizard
To start the Cluster Application Wizard, open the Cluster Administrator. On the File menu, click Configure Application.
At the Welcome to the Cluster Application Wizard page, click Next.
On the Select or Create a Virtual Server page, you can configure an application for an existing virtual server or create a virtual server. Because this section is about creating virtual servers, Create a new virtual server is selected.
The next page is used to select a group for the virtual server or to create a new group. In general, you should use an existing cluster group, unless a storage class resource is either not needed or will be created in or moved to the new group. The storage class resource to be used by this virtual server is located in Disk Group 1.
Even though an existing group is selected, you'll want to give the group a new name and description to reflect its current function. In this particular instance, the group is named after the virtual server, but this is not necessary and may not even be desirable because a group can host more than one virtual server.
After you configure the group name and description, the IP address and network name for the virtual server are provided.
After the virtual server properties are provided, the wizard lets you edit detailed information about the group and resources. This is not necessary at this time, so it is skipped.
At this point, the virtual server is configured. The wizard prompts you to create an application resource. In this example, only the virtual server itself is of interest, so select No, I'll create a cluster resource for my application later.
The wizard then provides a status page.
After you click Finish, the group is created in an offline state. To fully test the virtual server, right-click the group, and then click Bring Online.
If all is well, the group should come completely online. To check the virtual server, ping from a client by IP address. To test name resolution, ping by name.
Using the Manual Method
Rather than use the Cluster Application Wizard, you can create a virtual server manually simply by adding an IP address and a network name to an existing group. To begin, right-click the group in which the virtual server will exist, click New, and then click Resource. In this example, Disk Group 2 was clicked. This starts the New Resource Wizard.
The first resource that must be created is an IP address resource. This example shows the creation of the SEA-NA-FILE-02 virtual server. The first step is to identify the resource type. In the Name box, you can type a convenient label. The Description box lets you provide more detailed information.
The next page in the wizard lets you specify possible owners. By default, all nodes in the cluster are possible owners. If a node is removed from the possible owners list of a resource, the group in which the resource exists cannot fail over to that node. In this example, the group is intended for failover, so the default is used.
The next step in correctly adding the resource is to configure resource dependencies. IP addresses have no dependencies, so you can leave this page as is.
The final step is to configure resource-specific properties. An IP address has the following parameters:
Address. The IP address that will be registered on the adapter.
Subnet mask. The subnet mask that will be used for this IP address. This field is populated automatically.
Network. The network on which the IP address will be registered.
Enable NetBIOS for this address. Clear this check box if NetBIOS is not needed for this virtual server. NetBIOS is required for browsing to function properly. There is a limit of 64 NetBT devices in Windows. If this limit is exceeded, additional IP address resources cannot use NetBIOS.
When you click Finish, the IP address is created and a confirmation message appears.
Now you must configure a network name for the virtual server. The procedure is much the same, with the exception of the way you select the type of resource dependencies and resource-specific parameters. You start the New Resource Wizard from Disk Group 2 by right-clicking the group, clicking New, and then clicking Resource. The resource type is, of course, a network name.
Leave Possible owners at the default setting.
A dependency is configured for the previously created IP address. A dependency on an IP address is required for a network name.
The only resource-specific parameter for a network name is the name itself. This is the name of the virtual server on the network, and it must conform to standard computer naming rules.
Click Finish to create the network name.
At this point, you can bring the resource group online in the same manner described in the Cluster Application Wizard example. To give the group a more descriptive name, right-click the group, and then click Rename on the shortcut menu.
In this example, the Cluster Application Wizard is used to create the virtual server SEA-NA-PRINT-01 and configure the print spooler resource.
In this example, the intent is to create a new virtual server and configure the application in one step. Select Create a new virtual server, and then click Next.
For this example, select an existing resource group with a storage class resource. This virtual server uses the storage class resource in Disk Group 3.
The group name and description are provided.
No additional configuration items are needed for the Advanced Properties page. Click Next to create the virtual server.
On the next screen, Yes, create a cluster resource for my application now is selected. Click Next.
The resource type is Print Spooler. Click Next.
The name and description for the resource are provided. To add dependencies, click Advanced Properties.
In the Advanced Resource Properties dialog box, click the Dependencies tab, and then click Modify.
In the Modify Dependencies dialog box, move the appropriate dependencies from the Available Resources list to the Dependencies list.
After you have configured the dependencies, click OK to close all the dialog boxes until you see the screen below. The resource is created, and the wizard prompts you for resource-specific parameters. The wizard fills in the parameters based on dependencies. Usually, no additional modification is needed, so you can click Next.
The virtual server and spooler resource are now configured. The group can now be brought online.
In this example, the connection must be made to SEA-NA-PRINT-01. To start the connection, on the Start menu, click Run.
The Printers folder is selected.
From the Printers folder, start the Add Printer Wizard. From a remote connection, the only option is to add a printer to the remote print server.
Because this is a new printer, you must create a new port.
The Add Standard TCP/IP Printer Port Wizard is started.
The port name field is populated as soon as you type the printer name or IP address.
After you configure the port, the Add Printer Wizard prompts you for a driver. In this example, a Lexmark Optra is selected.
You then name the printer.
The share name is populated automatically, but you can change it.
You can then provide information about the printer's location and capabilities.
When you click Next, the status screen shows you that the printer has been installed.
To start the connection, on the Start menu, click Run. The command shown in Figure 50 starts the Add Printer Driver Wizard.
You can use this wizard to install the necessary print drivers for the other node.
This step is necessary only for the first Windows 2000 printer driver. After you complete this step, you can install additional drivers for other platforms through the printer user interface.
This section describes how to configure a group for failback to a preferred owner. The example is drawn from the scenario for creating normal file shares and configuring failback.
To begin configuring failback, right-click the group, and then click Properties.
On the General tab in the group Properties dialog box, click Modify.
In the Modify Preferred Owners dialog box, add the desired preferred owner.
This example is drawn from the first scenario, so SEA-NA-CLN-01 is added.
On the Failback tab, the default option of Prevent failback is changed to Allow failback, with Failback between configured for 23 and 0 hours.
When you click OK, the process is complete and the group is configured for failback to a preferred owner.
http://technet.microsoft.com/en-us/library/bb727115.aspx
Basetypes, Collections, Diagnostics, IO, RegEx...
Recently, a number of questions have surfaced about the accuracy of the .NET Framework when working with the binary representation of numbers. The issue surfaces most clearly when we convert the hexadecimal or octal string representation of a numeric value that should be out of range of its target data type to that data type. For example, in the following code we would expect that an OverflowException would be thrown when we increment the upper range of a signed integer value by one, call the Convert.ToString method to convert this integer value to its hexadecimal string representation, and then call the Convert.ToInt32 method to convert the string back to an integer. Here is the C# code:
// Define the conversion base used throughout.
const int HEXADECIMAL = 16;
// Re-create the out-of-range value described above.
long number = (long)int.MaxValue + 1;
string numericString = Convert.ToString(number, HEXADECIMAL);
// We expect that this will throw an OverflowException, but it doesn't.
try {
int targetNumber = Convert.ToInt32(numericString, HEXADECIMAL);
Console.WriteLine("0x{0} is equivalent to {1}.",
numericString, targetNumber);
}
catch (OverflowException) {
Console.WriteLine("0x{0} is out of the range of the Int32 data type.",
numericString);
}
And here is the equivalent Visual Basic code:
' Define the conversion base and re-create the out-of-range value.
Const HEXADECIMAL As Integer = 16
Dim number As Long = CLng(Integer.MaxValue) + 1
Dim numericString As String = Convert.ToString(number, HEXADECIMAL)
' We expect that this will throw an OverflowException, but it doesn't.
Try
Dim targetNumber As Integer = Convert.ToInt32(numericString, HEXADECIMAL)
Console.WriteLine("0x{0} is equivalent to {1}.", _
numericString, targetNumber)
Catch e As OverflowException
Console.WriteLine("0x{0} is out of the range of the Int32 data type.", _
numericString)
End Try
Instead of the expected OverflowException, this code produces what is apparently an erroneous result:
0x80000000 is equivalent to -2147483648
If we look at the binary rather than the decimal and hexadecimal representations of this numeric operation, the source of the problem becomes readily apparent. We began with Int32.MaxValue:
Bit #: 3 2 1
10987654321098765432109876543210
01111111111111111111111111111111
For Int32.MaxValue, each bit except the highest order bit of the 32-bit value is set. This represents the maximum value of a signed integer because the single unset bit is the sign bit in position 31. Because this bit is unset, it indicates that the value is positive. We then increment Int32.MaxValue by 1. Note that the variable to which we assign the new value is an Int64; we cannot assign the value to an Int32 without exceeding the bounds of the Int32 data type and causing an OverflowException to be thrown. The new bit pattern of the resulting value is:
Bit #: 6 5 4 3 2 1
3210987654321098765432109876543210987654321098765432109876543210
0000000000000000000000000000000010000000000000000000000000000000
So incrementing Int32.MaxValue by one sets bit 31 and clears bits 0 through 30. Bits 32 through 62 remain unset and the sign bit in position 63 is set to 0, which indicates that the resulting value is positive.
Because leading zeroes are always dropped from the non-decimal string representations of numeric values, the call to Convert.ToString(value, toBase) produces a binary string whose length is 32:
Bit #: 3 2 1
10987654321098765432109876543210
10000000000000000000000000000000
This suggests that the unexpected output produced by our code is the result of two different programming errors. First, we’ve inadvertently allowed the string representation of a 64-bit signed integer value to be interpreted as the string representation of a 32-bit signed integer value. Second, by ignoring how signed and unsigned integers are represented, we’ve allowed a positive integer to be misinterpreted as a signed negative integer. Let’s look at each of these issues in some detail.
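Before doing so, the whole failed round trip can be condensed into a short model. The following Python sketch mirrors the .NET behavior described above; the variable names and the explicit sign adjustment are mine, used here only to make the bit-level reinterpretation visible:

```python
# Model of the failed round trip (Python sketch of the .NET behavior
# described in the text, not actual Framework code).

INT32_MAX = 2**31 - 1           # Int32.MaxValue
number = INT32_MAX + 1          # held in a 64-bit variable in the article

# Convert.ToString(number, 16) drops leading zeroes: 8 hex digits, not 16.
numeric_string = format(number, 'x')
print(numeric_string)           # 80000000

# Convert.ToInt32(s, 16) treats those 8 digits as a signed 32-bit pattern:
# bit 31 is set, so the value is reinterpreted as negative (two's complement).
value = int(numeric_string, 16)
if value >= 2**31:
    value -= 2**32
print(value)                    # -2147483648
```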
Ordinarily, the C# compiler enforces type safety by prohibiting implicit narrowing conversions, and the Visual Basic compiler can be configured to prohibit implicit narrowing conversions by setting Option Strict on. This constraint means that, in order to successfully compile code that performs a narrowing conversion, the developer must explicitly use a C# casting operator or a Visual Basic conversion function. This, of course, requires that the developer be aware of the narrowing conversion. In other words, handling a narrowing conversion is the responsibility of the developer.
For example, if the previous code is rewritten so that it does not have to parse the string representation of a numeric value, we must deal with the fact that an Int64 cannot be safely converted to an Int32. The resulting C# code is:
// Increment a number so that it is out of range of the Integer type.
long number = (long)int.MaxValue + 1;
// Convert the number back to an integer.
// This will throw an OverflowException if the code is compiled
// with the /checked switch.
try {
int targetNumber = (int)number;
Console.WriteLine("Converted {0} to a 32-bit integer.", targetNumber);
}
catch (OverflowException) {
Console.WriteLine("{0} is out of the range of the Int32 data type.",
number);
}
If Option Strict is set on, the resulting Visual Basic code is:
' Increment a number so that it is out of range of the Integer type.
Dim number As Long = CLng(Integer.MaxValue) + 1
' Convert the number back to an integer.
' This will throw an OverflowException.
Try
Dim targetNumber As Integer = CInt(number)
Console.WriteLine("Converted {0} to a 32-bit integer.", targetNumber)
Catch e As OverflowException
Console.WriteLine("{0} is out of the range of the Int32 data type.", _
number)
End Try
Conversions can still produce overflows at run time, but at least the compiler alerts the developer that an overflow is possible and should be handled. However, because our original example converted a numeric value to its string representation and then converted it back to a numeric value, we’ve bypassed the safeguards that the compiler implements to alert us to the possibility of data loss in a narrowing conversion. To put it another way, the developer is solely responsible for ensuring type safety and for handling conversions when converting between numbers and their string representations. Had our code enforced type safety, it would have converted the string representation of Int32.MaxValue + 1 to an Int64 value rather than an Int32 value, as the following C# code shows:
long targetNumber = Convert.ToInt64(numericString, HEXADECIMAL);
Console.WriteLine("0x{0} is equivalent to {1}.",
numericString, targetNumber);
The equivalent Visual Basic code is:
Dim targetNumber As Long = Convert.ToInt64(numericString, HEXADECIMAL)
Console.WriteLine("0x{0} is equivalent to {1}.", _
numericString, targetNumber)
A second serious source of error in our initial example is that we’ve failed to consider numeric representations and their effect on our conversion operation. This is a common source of errors in programs. However, while the compiler provides some safeguards against data loss in narrowing conversions, it provides no safeguards when the developer chooses to work with binary data directly. In these cases, ensuring that the representation of a number is appropriate for the operation being performed is always the responsibility of the developer. This is true whenever the developer works with binary (or octal or hexadecimal) data directly either as a sequence of bits (for example, when the developer performs bitwise operations on two values or as a byte array) or when the developer is working with the non-decimal string representation of a numeric value. Moreover, this is true of any platform and is not limited to Microsoft Windows or the .NET Framework. In particular:
Our initial example produced unexpected results because we passed the string representation of what turned out to be an unsigned 32-bit integer to a conversion method, Convert.ToInt32(value, fromBase), that expected the value parameter to be the string representation of a signed 32-bit integer. Note that the actual result of this conversion depends on the particular magnitude of the 32-bit unsigned integer: values below 0x80000000 convert to the same positive Int32 value, while values from 0x80000000 through 0xFFFFFFFF have bit 31 set and are reinterpreted as negative Int32 values.
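The magnitude dependence can be sketched as follows. This is a Python model of the conversion behavior described in the text; the helper name is mine, not a Framework API:

```python
# Sketch of how a hex string of a 32-bit value maps to a signed Int32
# (Python model of the behavior described in the text).

def hex_to_int32(s):
    """Interpret a hex string as a signed 32-bit integer, .NET-style."""
    value = int(s, 16)
    if value > 0xFFFFFFFF:
        raise OverflowError(s)   # more than 32 bits of data
    return value - 2**32 if value >= 0x80000000 else value

for s in ("7FFFFFFF", "80000000", "FFFFFFFF"):
    print(s, "->", hex_to_int32(s))
# 7FFFFFFF -> 2147483647   (bit 31 clear: positive, unchanged)
# 80000000 -> -2147483648  (bit 31 set: reinterpreted as negative)
# FFFFFFFF -> -1
```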
A clearer illustration of the problems that result from working with binary values that have different numeric representations arises when we perform a bitwise operation on integers with different signs. For example, the Visual Basic code
Console.WriteLine(16 And -3)
produces a rather unexpected result of 16 when run under the common language runtime. This result reflects the fact that the runtime uses two’s complement representation for negative integers and absolute magnitude representation for positive integers. The following example illustrates why the result of this bitwise And operation is 16:
00000000000000000000000000010000
And 11111111111111111111111111111101
00000000000000000000000000010000
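Python's integers also use two's-complement semantics for bitwise operators, so the same result can be checked directly:

```python
# Python's bitwise operators use two's-complement semantics, so the
# example above can be verified directly.
print(16 & -3)   # 16

# The low 32 bits of -3 are the pattern shown in the illustration.
print(format(-3 & 0xFFFFFFFF, '032b'))
```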
Although the .NET Framework uses two’s complement representation for signed integers, one’s complement representation is also in use on some platforms. We can determine the method of representation with the two utility functions shown in the following C# and Visual Basic code:
// C#
public class BinaryUtil
{
public static bool IsTwosComplement()
{
return Convert.ToSByte("FF", 16) == -1;
}
public static bool IsOnesComplement()
{
return Convert.ToSByte("FE", 16) == -1;
}
}
' Visual Basic
Public Class BinaryUtil
Public Shared Function IsTwosComplement() As Boolean
Return Convert.ToSByte("FF", 16) = -1
End Function
Public Shared Function IsOnesComplement() As Boolean
Return Convert.ToSByte("FE", 16) = -1
End Function
End Class
Performing the And operation with integers that have different signs then requires that we use a common method to represent their values. The most common method is a sign and magnitude representation, which uses a variable to store a number’s absolute value and a separate Boolean variable to store its sign. Using this method of representation, we can define the And operation as follows:
// C#
public static int PerformBitwiseAnd(int operand1, int operand2)
{
// Set flag if a parameter is negative.
bool sign1 = Math.Sign(operand1) == -1;
bool sign2 = Math.Sign(operand2) == -1;
// Convert two's complement to its absolute magnitude.
if (sign1)
operand1 = ~operand1 + 1;
if (sign2)
operand2 = ~operand2 + 1;
if (sign1 & sign2)
return -1 * (operand1 & operand2);
else
return operand1 & operand2;
}
' Visual Basic
Public Function PerformBitwiseAnd(ByVal operand1 As Integer, ByVal operand2 As Integer) As Integer
' Set flag if a parameter is negative.
Dim sign1 As Boolean = (Math.Sign(operand1) = -1)
Dim sign2 As Boolean = (Math.Sign(operand2) = -1)
' Convert two's complement to its absolute magnitude.
If sign1 Then operand1 = (Not operand1) + 1
If sign2 Then operand2 = (Not operand2) + 1
If sign1 And sign2 Then
Return -1 * (operand1 And operand2)
Else
Return operand1 And operand2
End If
End Function
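The following sketch contrasts the method above with the runtime's built-in operator (it assumes the C# PerformBitwiseAnd method above is defined in the same class as Main):

```csharp
using System;

class BitwiseAndDemo
{
    static void Main()
    {
        // The runtime's two's complement And:
        Console.WriteLine(16 & -3);                   // 16
        // Sign-and-magnitude And: |-3| = 3, and 16 & 3 = 0.
        Console.WriteLine(PerformBitwiseAnd(16, -3)); // 0
        // Both operands negative: 5 & 3 = 1, negated to -1.
        Console.WriteLine(PerformBitwiseAnd(-5, -3)); // -1
    }
}
```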
While converting binary values to sign and magnitude representation solves the problem of performing binary operations on non-decimal numbers, it does not address either of the issues raised when converting the string representation of a non-decimal number to a numeric value. When performing such string-to-numeric conversions, the root of the problem lies in the fact that at the time it is created, the string representation of a number is effectively disassociated from its underlying numeric value. This can make it impossible to determine the sign of that numeric string representation when it is converted back to a number.
However, we can solve the problem of restoring a non-decimal value from its string representation by defining a structure that includes a field to indicate the sign of the decimal value. For example, the following structure includes a Boolean field, Negative, that is set to true when the numeric value from which a non-decimal string representation is derived is negative. It also includes a Value field that stores the non-decimal string representation of a number.
// C#
struct NumericString {
public bool Negative;
public string Value;
}
' Visual Basic
Public Structure NumericString
Public Negative As Boolean
Public Value As String
End Structure
Storing a sign flag together with the string representation of a non-decimal number preserves the tight coupling between the string representation of a number and its sign. This in turn allows us to examine the sign field and ensure that the appropriate conversion or action is taken when the string is converted back to a numeric value. For example, the following code defines a static (or Shared in Visual Basic) method named ConvertToSignedInteger that takes a single parameter (an instance of the NumericString structure defined previously) and returns an integer. The method throws an OverflowException if the string’s numeric value overflows the range of the Int32 data type. It also throws an OverflowException if the NumericString.Negative field is False, indicating that the numeric value is positive, but the sign bit is set in the numeric value represented by the NumericString.Value field. This combination indicates a value in the range from Int32.MaxValue + 1 to UInt32.MaxValue, which lies entirely outside the range of the Int32 data type.
// C#
class ConversionLibrary
{
public static int ConvertToSignedInteger(NumericString stringValue)
{
// Convert the string to an Int32.
try
{
int number = Convert.ToInt32(stringValue.Value, 16);
// Throw if sign flag is positive but number is interpreted as negative.
if ((! stringValue.Negative) && ((number & 0x80000000) == 0x80000000))
throw new OverflowException(String.Format("0x{0} cannot be converted to an Int32.",
stringValue.Value));
else
return number;
}
// Handle legitimate overflow exceptions.
catch (OverflowException e)
{
throw new OverflowException(String.Format("0x{0} cannot be converted to an Int32.",
stringValue.Value), e);
}
}
}
' Visual Basic
Public Class ConversionLibrary
Public Shared Function ConvertToSignedInteger(ByVal stringValue As NumericString) As Integer
' Convert the string to an Int32.
Try
Dim number As Integer = Convert.ToInt32(stringValue.Value, 16)
' Throw if sign flag is positive but number is interpreted as negative.
If (Not stringValue.Negative) And ((number And &H80000000) = &H80000000) Then
Throw New OverflowException(String.Format("0x{0} cannot be converted to an Int32.", _
stringValue.Value))
Else
Return number
End If
' Handle legitimate overflow exceptions.
Catch e As OverflowException
Throw New OverflowException(String.Format("0x{0} cannot be converted to an Int32.", _
stringValue.Value), e)
End Try
End Function
End Class
Our initial code example returned an erroneous result when we incremented Int32.MaxValue by 1, converted it to a hexadecimal string, and then converted the string back to an integer value. When we perform the same basic set of operations using the NumericString structure and the ConvertToSignedInteger method, the result is an OverflowException. This is shown in the following code:
// C#
public class Executable
{
public static void Main()
{
// Define a number.
Int64 number = (long)Int32.MaxValue + 1;
// Define its hexadecimal string representation.
NumericString stringValue;
stringValue.Value = Convert.ToString(number, 16);
stringValue.Negative = (Math.Sign(number) < 0);
ShowConversionResult(stringValue);
NumericString stringValue2;
stringValue2.Value = Convert.ToString(Int32.MaxValue, 16);
stringValue2.Negative = Math.Sign(Int32.MaxValue) < 0;
ShowConversionResult(stringValue2);
NumericString stringValue3;
stringValue3.Value = Convert.ToString(-16, 16);
stringValue3.Negative = Math.Sign(-16) < 0;
ShowConversionResult(stringValue3);
}
private static void ShowConversionResult(NumericString stringValue)
{
try {
Console.WriteLine(ConversionLibrary.ConvertToSignedInteger(stringValue).ToString("N0"));
}
catch (OverflowException e) {
Console.WriteLine("{0}: {1}", e.GetType().Name, e.Message);
}
}
}
' Visual Basic
Module Executable
Public Sub Main()
' Define a number.
Dim number As Int64 = CLng(Int32.MaxValue) + 1
' Define its hexadecimal string representation.
Dim stringValue As NumericString
stringValue.Value = Convert.ToString(number, 16)
stringValue.Negative = (Math.Sign(number) < 0)
ShowConversionResult(stringValue)
Dim stringValue2 As NumericString
stringValue2.Value = Convert.ToString(Int32.MaxValue, 16)
stringValue2.Negative = Math.Sign(Int32.MaxValue) < 0
ShowConversionResult(stringValue2)
Dim stringValue3 As NumericString
stringValue3.Value = Convert.ToString(-16, 16)
stringValue3.Negative = Math.Sign(-16) < 0
ShowConversionResult(stringValue3)
End Sub
Private Sub ShowConversionResult(ByVal stringValue As NumericString)
Try
Console.WriteLine(ConversionLibrary.ConvertToSignedInteger(stringValue).ToString("N0"))
Catch e As OverflowException
Console.WriteLine("{0}: {1}", e.GetType().Name, e.Message)
End Try
End Sub
End Module
When this code is executed, it displays the following output to the console:
OverflowException: 0x80000000 cannot be converted to an Int32.
2,147,483,647
-16
This is an excellent examination of the topic. However, the conclusion that the .NET Framework is handling this situation correctly is flawed, for one simple reason:
"Because leading zeroes are always dropped from the non-decimal string representations of numeric values..."
This is wrong!! When representing negative values using binary two's complement, leading zeros are significant and cannot be dropped. Dropping them changes the value of the number, and therefore the behavior of the Convert.ToString method is wrong. Simply adding a comment to the documentation saying "it's wrong on purpose" is not sufficient; the buggy behavior needs to be fixed.
I’m not sure the previous post is correct. If the leading bits are zeros and they are dropped, then when the number is converted back from binary it will just be padded back out to 32 bits with zeros. This would put bit 31 back as a zero and indicate that the number is positive.
If the original value was negative, then the bit would be a 1 and would not have been trimmed.
So, this is int.MaxValue:
01111111111111111111111111111111
This is int.MaxValue with the leading zero missing:
1111111111111111111111111111111
And then this is what would happen when it is converted by Convert.ToInt32(): the zero would be “assumed” by the conversion process. Thus the sign would end up correct.
@Aaron,
You are assuming that the string representation will always be converted back into a number with the same storage size. But this article is about what happens when the binary representation is converted into a number with a different storage size.
Why make the developer jump through all these hoops to correctly convert the binary representation from one size to another, when all you have to do is treat at least one leading zero as significant? Then the binary representation can be counted on to be accurate regardless of how long it is or what the size was of the location where the value was originally stored. In the example from the article, the result of Convert.ToString(number, HEXADECIMAL) would be a string 33 digits long, which would result in an OverflowException, which is exactly what the developer wants.
In ConversionLibrary.ConvertToSignedInteger, wouldn't this comparison make more sense:
// Throw if sign flag is positive but number is interpreted as negative.
if ((!stringValue.Negative) && (number < 0))
throw new OverflowException(String.Format("0x{0} cannot be converted to an Int32.", stringValue.Value));
This way you're not relying on the bit pattern (which actually will cause an implicit widening anyway).
@David,
The only way that preserving the leading digit will help you is if you have a rule to automatically sign-extend that leading digit whenever converting away from a string. And I guarantee you that doing that will break almost every app in the world, since strings frequently come from user input or external data sources that won't contain a leading sign digit anyway.
The problem mentioned in this article only exists if you do have a mismatch in sizes between your string generator and consumer; if due care is taken to use the same types then there isn't a problem. Or at least there wouldn't be a problem if VB joined the Real World and got itself some unsigned types.
@Miral,
If there is no leading sign digit, then it isn't a signed number, and a conversion to a signed type is likely to fail anyway. I'm not concerned with that scenario. I am talking about precisely representing a signed number as a string, which can easily be done by treating a leading 0 as significant. If you truncate all of the leading 0s, then you have to carry the original binary length of the number around with the string representation in order to be able to convert it back to a numeric type. Why force developers to take that extra step, when you could just leave a leading 0 and trust that the string representation is always exact, no matter what length it is?
I don't think I understand why the Convert.ToInt32 operation is working correctly when the example bit pattern (from the paragraph before the "Accidental Change of Type" section) implies the value -0.
Bit #: 3 2 1
10987654321098765432109876543210
10000000000000000000000000000000
It seems like this specific value is clearly the result of an overflow. (Is -0 legal? If so, why?) However, it would be impossible to say that (int.MaxValue + 2) is or isn't -1 during the conversion, so maybe it makes sense that checking for the specific -0 case is a waste of time.
http://blogs.msdn.com/bclteam/archive/2008/04/09/working-with-signed-non-decimal-and-bitwise-values-ron-petrusha.aspx
Design Guidelines, Managed code and the .NET Framework
A recent internal thread and a little nudge inspired me to offer this little quiz to keep the old grey matter working over the holiday break.
In V2.0, does this code compile? If not why not and how would you fix it?
Obviously the quiz is a little more challenging if you attempt it without the aid of the compiler…
using System;
using System.Threading;
public class Class1
{
public static void Main () {
new Thread(delegate { Console.WriteLine("On another thread"); }).Start();
}
}
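(Spoiler sketch, working from memory of the C# 2.0 rules rather than the official answer: an anonymous method written without a parameter list, delegate { ... }, is compatible with any delegate signature, so the Thread constructor call is ambiguous between the ThreadStart overload and the ParameterizedThreadStart overload added in 2.0, and the code does not compile. Giving the anonymous method an explicit empty parameter list resolves the ambiguity.)

```csharp
using System;
using System.Threading;

public class Class1
{
    public static void Main()
    {
        // delegate() { ... } is compatible only with parameterless
        // delegate types, so it binds to the ThreadStart overload.
        new Thread(delegate() { Console.WriteLine("On another thread"); }).Start();
    }
}
```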
http://blogs.msdn.com/brada/archive/2004/12/21/329270.aspx
Where: Omni Shoreham Hotel, Washington DC
Registration link: Register online here
Description of the Conference:
The response to the Washington DC ITARC has been amazing. Attendance is way up thanks to many organizations sending their entire teams.
Send out the email to a colleague or a friend. Have them put your name in the Referred By box when signing up.
Whoever gets the most wins!
Please take this opportunity to help us really blow the doors off of our first Washington DC conference. All you have to do to support your chapter is send out this flyer and attend the event.
The conference only holds 300 so get registered today.
ITARC 2007 Agenda - The following lists the sessions available at the event. Attendees will be able to get a DVD with ALL sessions from all three ITARC locations (over 100 sessions). Check out the event's awesome lineup of speakers.
Register before September 10, 2007 to receive a $100 discount. IASA members receive an additional discount in addition to the early bird special:
Register by September 10, 2007: IASA member $350, non-member $500
Register after September 10, 2007: IASA member $450, non-member $600
Questions or comments? Please contact events@IASAhome.org.
In the last step we decided that a combination of federation and centralization will suit the enterprise best. So the underlying question is: what parts are centralized and what parts are federated? The answer is ... it depends! There are a variety of factors that go into the choosing. Let's go over a few of the big ones. Remember, we don't need to be 100% right, just close enough to engage the rest of the team in the conversation.
Question: How tightly or loosely coupled should the parts be?
Coupling is a subject tossed about at all levels. Components, applications, services, and architectures all raise the debate of when some coupling is OK, when more is better, and when too much is, well, too much. I do believe the consensus is to limit tight coupling within the architecture to allow for future changes in the enterprise. There are any number of good patterns to accomplish this. Edge services and providers comprised of Brokers, Plugs, and Adapters can monitor and react to the current state of things; some variant of Bus can provide an architectural channel separating participating parts as well as a mechanism for implementing canonical semantics; or simple facades could be used to present and transform on a boundary.
For Message2You, the choices are driven by the current environment. Organization defines architecture. As a small company, Message2You is set up as a lightweight hierarchy: C-level managers have regional managers as direct reports. This is important. The architecture for an enterprise organized around business units tends to be strongly centralized, with each business unit using most if not all shared corporate capabilities. Enterprises that are organized around geographies tend to be largely self-supporting in each region, only lightly touching the corporate core. Here is an example of an enterprise that supports business units with shared corporate resources;
compare that to a more regionalized enterprise;
This second view looks like a good place to start.
The enterprise architecture has three major component areas to refine;
Next step: Select specific implementation architectures for each interaction within the enterprise.
During the last step we began scratching the surface of the needs of the enterprise. Clearly it was too broad a description to make any significant architectural decisions. Agile practices provide us a simple and effective approach to address this... "involving the customer". As we begin assembling the architecture we need to repeatedly, one might even say incrementally, engage the many representative communities the architecture is intended to support. But how best to do this? How do we create something lightweight and easily consumable by application users, project leads, IT managers, and CxOs? Personally I am a fan of the "Boxes and Lines" model, so let's run with that for now.
For a macro level architecture a "Boxes and Lines" presentation might look something like this;
Peer to Peer, where the intent is to make every participant equal. Or maybe this;
a Centralized approach where all the participants share a single core but don't interact with each other. Another alternative might look like this;
A federation of participants comprised of mixed communication networks. Most participants work autonomously but choose one or more relationships within the federation.
Let's look at each and how well it does or doesn't support the enterprise. In our fictional organization, Message2You, we have a few guiding comments to consider;
To this let's add a few more things learned during our ongoing conversation with individuals throughout the enterprise;
So, are we looking at a Peer to Peer, Federated, or Centralized Architecture for Message2You?
Peer to Peer is a contender but isn't a great fit for an implementation that needs to propagate changes quickly to all points. It would, however, be a good choice for a highly reliable system, and spreading the processing load may also prove an excellent way to scale without negatively impacting user-perceived performance.
Centralized looks great on the surface. Most of the control, monitoring, and performance issues are quickly settled in a centralized system. It presents fewer moving parts and limited deployment issues. Centralization fails to support multiple independent customer organizations.
Federated addresses the multiple customer organizations by allowing each participant solution to work as independently or as connected as it needs to, but it fails terribly on rapid dissemination of change, and it brings increased management and monitoring complexities.
If none of the macro architectures work, what do we do? Well, there are two possibilities. First, we can address the requirements and scope: it may be possible to negotiate with our customer and redefine success in such a way that it will better fit one of the macro architecture patterns. Second, we can look at combining the macro architectures into a more problem-specific composite architecture.
I lean strongly toward the composite architecture: one that provides the autonomous support of the federated architecture with the ease of management and responsiveness of a centralized architecture. I do not see the need for a peer-to-peer implementation with the requirements as we know them, but that may well change as we discover more. For now let's use this as a starting point;
This approach is centralized in that all the participating organizations are required to use a centralized "core" to interact with any other organization, while it has traits of a federation in that each participating organization can operate with some degree of autonomy.
It's still not enough to engage the customer in discussion. For that we need to go down another level of detail and really begin to make it address the specific issue of the Message2You enterprise.
Next step: Further refining the architecture
Application development code is too focused and too volatile to be promoted across the enterprise, and enterprise code is too generic and too hard to change for it to precisely address application needs. Coming to an agreement on these ownership / boundaries within the enterprise is critical for the intertwined success of all parties involved.
As a first step, I am proposing a simple ownership scheme. Given the four major types of binaries (user, application, domain, plumbing) along the X axis and application and enterprise developers along the Y axis, we can plot two curves reflecting the "level of influence" assignable to either the developers or the enterprise (see figure below).
The separation is not artificial, and reflects three very important principles;
Next, we will look at implementing the parts within the control of the Enterprise Architecture.
High quality applications are simple. They do some one thing well. The classic example is the famous Hello World app. It simply displays text back to the caller. While there are vast language-dependent permutations, let's follow our own advice, keep it simple, and stick with C#.
using System;
class HelloWorldApp
{
public static void Main()
{
Console.WriteLine("Hello, world!");
}
}
Simple, easy, effective … what more could one want? How about making this enterprise ready?
First we need to get more serious about what Enterprise Ready means. On the short list, an enterprise-ready application would;
Left to fend for itself, this is nearly punitive overhead for a simple application. To keep it simple and still meet the enterprise needs, the people responsible for the enterprise architecture need to provide the parts, and examples of their application (not their internals), to the developers. If the enterprise parts exist, the changes to the code may be a few extra parts and a few additional lines of code.
A client windows application will need to be built to call a server hosted façade.
The façade would implement the enterprise security and call a refactored, generic text handler. Except for the text handler's overloaded interfaces, it need only know about the enterprise data access component. Everything else would be invisible to the developer.
By most estimates the application would still be considered simple;
Windows form code;
private void button1_Click(object sender, System.EventArgs e)
{
    myFacade myF = new myFacade();
    CultureInfo cu = new CultureInfo("es-ES", false);
    string s = myF.getText(text, cu);
    this.textBox1.Text = s;
}
Façade code;
using System.Globalization;

namespace MyCompany
{
    public class myFacade
    {
        public string getText(string text, CultureInfo cu)
        {
            string s = MyCompany.DataAccess.getCultureText(text, cu);
            return s;
        }
    }
}
Complexity for the enterprise is provided by the enterprise;
So a simple application can comply with the enterprise architecture but ONLY IF the enterprise invests in providing the application developers enterprise ready components BEFORE the application is being built.
Being service oriented (SO) is, and continues to prove itself to be, a valuable and elegant approach for applications as well as enterprise architectures. However I am a bit baffled by why an architecture which is service oriented is being branded a new kind of architecture. An architecture may or may not be service oriented but the underlying architecture exists separately.
My search for clarity started by trying to pin down a definition for Service Oriented Architecture.
The W3C's Web Service Architecture Working Group defines Web Service Architecture as an instance of an SOA in which "a service is viewed as an abstract notion that must be implemented by a concrete agent. The agent is the concrete entity (a piece of software) that sends and receives messages, while the service is the abstract set of functionality that is provided." Well that clears that up, doesn’t it?
Equally obscure, but I believe more insightful, is a definition from Rick Murphy in his article “Centers: The Architecture of Services and the Phenomenon of Life: Christopher Alexander's theory of centers helps explain the essential properties of service-oriented architectures”. He invokes the theory of centers from Christopher Alexander, one of the original thinkers on architecture. Mr. Murphy defines SOA as “… services encapsulate the enduring mission of a virtual organization as centers of value. Services, as centers of value, are living structures we choreograph through a common grammar subject to fitness and evolution. Customer demand determines fitness based on service discovery and service description. Choreography defines the sequence and conditions under which services evolve.” I love the abstraction, but I think we need a more readily consumable definition.
For that I turn to Ramkumar Kothandaraman from Microsoft. In his MSDN article “SOA Challenges: Entity Aggregation” Ramkumar defines SOA as having “several core design principles:
This works for me: a five-point test I can apply to an implementation to gauge its level of SO-ness. Even so, these five points are possible characteristics of ANY implemented architecture. Personally, I would add Service Oriented as another style in Roy Fielding's dissertation, Architectural Styles and the Design of Network-based Software Architectures, in which he categorizes architectural approaches and weighs their appropriateness to a target problem domain. A distributed system can be layered, implement client caching, AND be service oriented. Considering one can maintain the basic constructs of SO in an embedded, layered, or distributed system, it seems clear that the base architecture is not affected by the inclusion or exclusion of services.
I am a big fan of risk management in all projects. It is however generally rare to see risks identified when developing an enterprise architecture. On the one hand, this seems to make sense. Enterprise architectures are there to address the shared risks for others. They are the cure, not the cause. Unfortunately as cures go, the enterprise architecture is rarely a simple thing and as complexity increases so does the number of opportunities to inject risk.
Consider two primary goals of the Enterprise Architecture: safety and security. All services exposed by the enterprise architecture need to be highly available to the point of ubiquity. Failures must be accounted for in planning, minimized by design, and masked from consumers during operation.
For example an enterprise wide collaboration service might include text messaging, virtual conferencing, and chat. On the surface these are fairly simple and individual user expectations are clear enough. They send a text string to a server and it re-broadcasts that string to the appropriate end points. The risk management plan would minimally address failure to service an inbound request and failure to reach any or all endpoints.
Alas, we well know there is much more to it than that. The collaboration service needs to provide a simple facade while securely scaling as required. Load balancing, provisioning, application isolation, authentication, and authorization are but a few of the critical yet invisible elements required to support the public interfaces, each creating new risk opportunities. How will device failures be addressed? Can the business accept lost transmissions or partial text blocks? Will the sessions need to be persisted? If so, for how long and how reliably? Addressing these risks adds layering to the architecture: we might consider a replicated repository for identity management, and clustering becomes a real possibility for the data stores.
The risk reduction enjoyed by consumers of a well-designed enterprise architecture comes at a price borne by the enterprise architecture itself. Managing these risks well should impact the form and function of the architecture as much as, if not more than, the stated requirements coming from the applications. While a bit of a vicious cycle, this reinforces the living and iterative nature of the enterprise architecture and the need for continuous evaluation.
A common phrase used to describe an enterprise architecture is a set of “living documents”. I like it. Short, simple, easy to understand … a living document. Of course there is a dangerously complex implication lying just below the veneer of this simple metaphor. Living implies change and growth. It would need to be fed and cared for. Most importantly, it begins as an immature child and develops into a mature contributor to its society.
Those of us who interact with enterprise architectures are affected by the living document metaphor in different ways depending on our role and the maturity of the architecture.
You may be the parent, growing the architecture with new ideas, carefully cleaning away the little day-to-day dirt and grime that accumulates during the early draft phases. You are its protector, fending off hungry projects looking to gorge on its young, unallocated funding. And you are its champion, fighting the good fight. You turn naysayers into supporters with tales of possibilities and the obvious future value your enterprise architecture will provide.
You may be the doctor, having received an urgent request from the worried parents; you make a house call in hopes of identifying and repairing an ailing architecture. Your years of experience, your deep understanding of how architectures work, and of course your calm bedside manner come together to reduce the fears and soothe away the pains. Your prescriptions are sometimes difficult to take and always come with warnings. Crisis averted, you can only hope the parents will implement the recommended treatments and get you back for follow-up visits. You may be the teacher, seeing the untapped potential in the enterprise architecture; you carefully insert just the right information at the right time. While it is very young you provide stories of other successful architectures and their exploits. You paint colorful pictures of systems supporting terabytes of data, massive concurrency, very high transaction rates, and global user communities. These are the superheroes of the enterprise architecture world. You can but hope to instill a sense of the possible. Later you begin the monotonous lessons relating rigor, discipline, reviews, change management, and the build process. No glamour, just facts. Hoping to build knowledge on whose shoulders understanding can later stand.
Eventually, the enterprise architecture matures, becoming a stable adult and valuable participant in the larger system within which it lives and works. Its boundaries and capabilities are well defined for all to see and interact with. Less reticent to see a doctor the enterprise invites scrutiny, takes all feedback (the good and the bad) and implements controlled change to integrate it. The prideful arrogance of youth has been replaced with an open team structure. Any and all projects are welcome to use the architecture and more importantly promote parts of themselves into the core architectural constructs.
With age comes rigorous process, clarity of definition, and actionable intent. Mature enterprise architectures have well defined and executed processes in place for continuous improvement. Communication is the cornerstone of the teams working with and on the enterprise architecture. Lessons learned, good and bad, are rapidly communicated to everyone in the enterprise through well known channels. Reviews are sufficient without being onerous on the participants. All in the architecture ambles along a well worn path in a predictable fashion … until the next generation comes on the scene. Then it all begins again.
http://blogs.msdn.com/stcohen/default.aspx
NAME
wcscasecmp - compare two wide-character strings, ignoring case

SYNOPSIS
#include <wchar.h>

int wcscasecmp(const wchar_t *s1, const wchar_t *s2);

DESCRIPTION
The wcscasecmp function is the wide-character equivalent of the strcasecmp function. It compares the wide-character string pointed to by s1 and the wide-character string pointed to by s2, ignoring case differences (towupper, towlower).

RETURN VALUE
The wcscasecmp function returns zero if the wide-character strings at s1 and s2 are equal except for case distinctions. It returns a positive integer if s1 is greater than s2, ignoring case, and a negative integer if s1 is less than s2, ignoring case.

CONFORMING TO
This function is a GNU extension.

SEE ALSO
strcasecmp(3), wcscmp(3)
http://linux.about.com/library/cmd/blcmdl3_wcscasecmp.htm
The abstraction provided by the MembershipProvider base class is a very powerful concept in ASP.NET 2.0. It simplifies the management of user information by providing a clean layer between application code and the membership data store.
ASP.NET 2.0 provides the SqlMembershipProvider class and the aspnet_regsql command-line utility to create the user-management database. My customer was building a hybrid application, which was mainly ASP.NET 2.0, but there were a few forms implemented using Windows Forms. The question arose as to how a Windows Forms login form could be created that would use the SqlMembershipProvider.
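For context, the membership database objects are created with the aspnet_regsql utility mentioned above. A hedged example invocation (the server and database names are illustrative, chosen to match the connection string used below):

```shell
# Provision the ASP.NET 2.0 membership schema on a local SQL Server.
# -S = server, -E = use Windows (trusted) authentication,
# -A m = add the membership feature, -d = target database.
aspnet_regsql.exe -S localhost -E -A m -d Helloworld
```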
This is what I did.
<configSections>
  <section name="connectionStrings"
           type="System.Configuration.ConnectionStringsSection, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           requirePermission="false" />
</configSections>
<connectionStrings>
  <add name="mydb"
       connectionString="Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Helloworld;"/>
</connectionStrings>
class MembershipProviderFactory
{
    static public MembershipProvider Create()
    {
        string cn = "";
        System.Configuration.Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        if (config == null)
            throw new ConfigurationErrorsException("Could not read config file");

        // Search for the connection string element at index zero.
        ConnectionStringsSection cnstringssection = config.ConnectionStrings;
        if (cnstringssection.ConnectionStrings.Count == 0)
            throw new ConfigurationErrorsException("connection string information not found in config file");

        ConnectionStringSettings cnsettings = cnstringssection.ConnectionStrings[0];
        cn = cnsettings.ConnectionString;
        if (string.IsNullOrEmpty(cn))
        {
            throw new ConfigurationErrorsException("connection string information (cn) not found in config file");
        }

        SqlMembershipProvider prov = new SqlMembershipProvider();
        NameValueCollection vals = new NameValueCollection();
        vals.Add("name", "sql");
        vals.Add("connectionStringName", "mydb");
        vals.Add("applicationName", "MyApplication");
        vals.Add("maxInvalidPasswordAttempts", "100");
        prov.Initialize("sql", vals);
        return (MembershipProvider)prov;
    }
}
System.Web.Security.MembershipProvider memprov;
memprov = MembershipProviderFactory.Create();
if (memprov.ValidateUser("user1", "pass@word1") == false)
{
    MessageBox.Show("Invalid username or password");
}
else
{
    MessageBox.Show("User credentials verified");
}
I do believe this is not the best way to solve my customer's problem. A better way would be to expose web services for the required functionality and let the Windows Forms app connect to the service using WSE 3.0/WCF, passing a username token along with the method invocation. This would be more secure, because the connection string is not exposed to the caller, and it would be n-tiered as well.
Have you tried using the default LocalSqlServer entry in the machine.config file of the ASP.NET machine?
var a = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None).ConnectionStrings as ConnectionStringsSection;
a.ConnectionStrings.Add(new ConnectionStringSettings("asd", "asdas"));
http://blogs.msdn.com/saurabhd/archive/2006/08/02/685552.aspx
Note: This article has been updated to work and compile with the RTM version of the Windows SDK.
Welcome to the fourth part of my Windows Presentation Foundation tutorial! In this tutorial we'll be building a plug-in system for Sudoku-solving algorithms, delving deeper into custom controls, including deriving from existing controls and implementing custom properties. We'll also complete more of the UI and game logic to start turning this pile of unrelated code into a game! First, let's look at building a plug-in system. There are essentially two ways of providing an extensibility system in a .NET application: embedded scripting, or precompiled plug-in modules.
If we take the scripting approach, we need to either create our own language, which is a lot of work (and honestly, who likes to reinvent the wheel?), or use the built-in compilation functionality in the .NET Framework to compile, say, C# code as a script. That takes less work but doesn't allow us to create a "sandbox" language that can only perform certain actions. Plug-in modules suffer from the same issues, but they don't require us to provide a set of tools for plug-in developers, since Visual Studio is all that's needed. With the availability of the free Visual Studio Express editions, there is really no excuse not to use precompiled modules, not to mention that it's less work and therefore less chance of creating bugs or security problems…at least that's what I tell my boss. There's only one flaw in this method: most of the time, standard applications like our Sudoku game run as so-called "full trust" applications. That means they have the full permissions of the user they are running under, which in most cases, at least on Windows XP, is the administrator. Obviously this raises a security concern. Most likely the user trusts the Sudoku game itself, since they knowingly installed it on their system, but do they trust each individual plug-in? What if, for example, the program could automatically download solver plug-ins from a central web repository; does the user trust those plug-ins? One way around this is to use .NET code access security (CAS). We can partition our application into a set of "application domains". You probably don't know it yet, but you've already used domains: when a .NET process starts, a default domain is created. You can even access it from any of your "normal" code through the CurrentDomain static property of the AppDomain class. We could just load more assemblies into the default domain, but they would then run with the same permissions as our application code:
This is bad for two reasons. First, the plug-in can do anything the application can: for example, if the application is allowed to write to a certain registry location by the operating system, then even though the plug-in can't break out of the Win32 limits, it can still trash our game's settings, which is bad. Second, because there is no clear segregation of code, the CLR errs on the side of caution and doesn't allow us to unload the plug-in module, since it's impossible to reasonably determine whether any portion of it is still in use. If we use the second approach:
The plug-in is sandboxed inside another domain. The only real caveat of this approach is that communication between domains requires the use of remoting; in other words, the objects need to be "self-contained" so that they can be serialized as they cross the boundary, but we'll get back to that later.
At this point you’re probably saying “Application domains! .NET remoting! This isn’t what I signed on for! I want to make a game!” Don’t worry, most of this is automatic but it’s important to understand what’s going on so you can debug your code and so you can avoid creating security flaws. So, how do we go about actually getting this to work? First, we need to define a programming interface for plug-ins to use, to keep it simple, let’s build an interface that is exported from our SudokuFX.exe assembly that plug-in authors are required to implement:
namespace SudokuFX
{
    public interface ISudokuSolver
    {
        string Name { get; }
        string Description { get; }
        string Author { get; }
        bool Solve(Board board);
    }
}
This would work well in the single-domain approach, but unfortunately it's not ideal for our situation. First, passing an instance of Board, while seemingly convenient, actually adds more complexity because of the serialization that occurs as an object crosses the domain boundary. This is a problem because the object contains links to event handlers and is databound to elements in the UI, so it can't easily be written to a self-contained stream. Also, it's generally a bad idea to expose unnecessary internal data to plug-ins anyway. If instead of a Board we pass a ref int?[,], a simple type that exposes no implementation details, it can easily be serialized. Also, in order to use an object through remoting it must contain some boilerplate code; an easy way of adding this code is to derive from the MarshalByRefObject class. Ideally we also want to hide this detail from plug-in authors, so it's actually better to make an abstract class like this:
public abstract class SudokuSolver : MarshalByRefObject
{
    public abstract string Name { get; }
    public abstract string Description { get; }
    public abstract string Author { get; }
    public abstract bool Solve(ref int?[,] board);
}
That way, all plug-in classes automatically inherit the code. Wait, no! STOP! It's likely I've convinced you that this is the best way of doing it, but it's not! Building something like this requires that you understand the security implications of what you're writing. There is a subtle flaw in what I've just described, and I fell into the trap the first time I wrote this code: in order to directly interact with an object in another app domain, the caller must load the other assembly to process its types. If a hostile plug-in writer embedded malicious code in, say, a static constructor, that code would run when the assembly is loaded in our main domain! Instead, we should build a proxy object that is defined in our main assembly, which can sit in the un-trusted domain and provide a layer between our objects. This way the un-trusted assembly is never loaded in our full-trust domain:
public class SudokuSolverContainer : MarshalByRefObject, ISudokuSolver
{
    ISudokuSolver solver;

    public void Init(Type t)
    {
        solver = Activator.CreateInstance(t) as ISudokuSolver;
    }

    public string Name { get { return solver.Name; } }
    public string Description { get { return solver.Description; } }
    public string Author { get { return solver.Author; } }

    public bool Solve(ref int?[,] board)
    {
        return solver.Solve(ref board);
    }
}
Now, we can create our container object as our main assembly’s “ambassador” to the least-privilege domain, then call the Init method with the type of solver we want to create. This way, calls to the plug-in are still straightforward but since the class from the plug-in assembly is never accessed directly from our trusted domain, there is no need to load the assembly there.
Next, let’s start building a plug-in we can use to test our system:
In Visual Studio, right-click on the SudokuFX solution and add a new class library project called "SampleSolver". Now, in the new project, right-click on References and select "Add Reference…"; in the "Projects" tab select "SudokuFX". We also need to add a post-build event to the new project to copy the DLL into our output folder: copy "$(TargetPath)" "$(SolutionDir)SudokuFx\bin\Debug". Now, if we define a class that implements SudokuFX.ISudokuSolver, we can start building a plug-in:
namespace SampleSolver
{
    public class SampleSolver : SudokuFX.ISudokuSolver
    {
        public string Name
        {
            get { return "Sample Sudoku Solver"; }
        }

        public string Description
        {
            get
            {
                return "This is a sample algorithm that uses a combination " +
                       "of logic and guess-and-check.";
            }
        }

        public string Author
        {
            get { return "Lucas Magder"; }
        }

        public bool Solve(ref int?[,] board)
        {
            //Do stuff
            return true;
        }
    }
}
Visual Studio will even set up the blank methods for you if you right-click on "SudokuFX.ISudokuSolver" and select "Implement Interface". This is great if you're still not 100% sure how to correctly set things up. I'll get back to how exactly my plug-in works in just a second, but let's jump ahead and write some code to load it.
First, I’ve added a new field to the Window1 class:
AppDomain PluginDomain = null;
It’s important we keep a reference to our new domain because, like any other object, it can get garbage collected and if that happens all the objects in the domain die, and that’s bad. Next, I added a method to load a plug-in:
ISudokuSolver LoadSolver(string path)
{
    AppDomainSetup ads = new AppDomainSetup();
    ads.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;

    PermissionSet ps = new PermissionSet(null);
    ps.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

    PluginDomain = AppDomain.CreateDomain("New AD", null, ads, ps);
Here, we first create an AppDomainSetup, which contains the parameters for our new domain. Right now, the only field we need to worry about is ApplicationBase; this makes sure that the loader code, which runs in the new domain, can locate our executable to resolve the references our plug-in contains to it. Next, we create a new empty permission set and add only one permission to it: execution. This means the code in that domain can run but do nothing else: no file access, no registry access, no network access, and so on. Finally, we create the new domain and store a reference to it in our new field. Now it's time to load our DLL:
    FileStream stream = new FileStream(path, FileMode.Open, FileAccess.Read);
    byte[] assemblyData = new byte[stream.Length];
    stream.Read(assemblyData, 0, (int)stream.Length);
    stream.Close();

    Assembly asm = PluginDomain.Load(assemblyData);
First, we read our plug-in from disk. It's important to note that we are not "loading" the assembly at this point: no initialization code executes and no processing of the data is done; we are merely reading the bytes from disk. We need to do this because the module loader itself runs in our new domain, which, if you recall, doesn't have access to the file system, not even to load its own code. The actual module loading occurs in the new domain from our buffer, so there is no chance of any code "escaping". Finally, we search the assembly using reflection to find any classes that implement ISudokuSolver:
    Type[] ts = asm.GetTypes();
    foreach (Type t in ts)
    {
        if (Array.IndexOf(t.GetInterfaces(), typeof(ISudokuSolver)) != -1)
        {
            Type container = typeof(SudokuSolverContainer);
            SudokuSolverContainer ssc = PluginDomain.CreateInstanceAndUnwrap(
                container.Assembly.FullName, container.FullName) as SudokuSolverContainer;
            ssc.Init(t);
            return ssc;
        }
    }
    return null;
}
The Type class is another way of representing the type of an object. Basically, imagine if you replaced all references to a type, say ArrayList, with a string variable set to "ArrayList"; at run time you could then change the string to, for example, "Hashtable", and all the code that used ArrayLists would now use Hashtables. In a sense, you've made the type of the variable a variable itself. OK, so if your head hasn't exploded yet, now imagine that instead of a string you used an instance of Type. Why? Well, first of all, string comparisons are bad form for things like this, and second, Type contains a plethora of useful stuff that allows you to do things like loop through the methods of a class or inspect the inheritance tree. How do you get Types, you say? It's simple: you use the typeof keyword to extract one from your class. Here we search for a type that implements ISudokuSolver; when we find one, we create an instance of it and return it. By default, creating an instance across a domain boundary creates an opaque proxy object that allows other domains to deal with types they don't reference. Because our code does have access to the SudokuSolverContainer class, we can unwrap the proxy to a transparent proxy. Finally, I added a new field to Window1 to contain the solver, which I then load in the Loaded event handler. Then, to make it work, I added a button to the timer pane, labeled "I Give Up", which executes the solver like this:
int?[,] a = Board.GameBoard.ToArray();
if (!Solver.Solve(ref a))
{
    MessageBox.Show("No Solution!");
}
else
{
    Board.GameBoard.FromArray(a);
}
Pretty straightforward eh? The proxy object does all the work of crossing the app domain. You can see that this works, if you try to put an offending line of code in the plug-in, execution stops immediately:
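As an aside, the typeof/Type/Activator machinery the loader relies on can be exercised in a tiny standalone program (the types used here are illustrative, not from the game):

```csharp
using System;
using System.Collections;

class TypeDemo
{
    static void Main()
    {
        // A Type object stands in for the type itself, so "which type"
        // becomes run-time data instead of something fixed at compile time.
        Type t = typeof(ArrayList);
        Console.WriteLine(t.FullName);

        // The same interface test used when scanning the plug-in assembly:
        bool implementsIList = Array.IndexOf(t.GetInterfaces(), typeof(IList)) != -1;
        Console.WriteLine(implementsIList);

        // Activator.CreateInstance turns a Type back into a live object,
        // just as SudokuSolverContainer.Init does with the solver type.
        object obj = Activator.CreateInstance(t);
        Console.WriteLine(obj.GetType() == t);
    }
}
```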
I also added a new board generation algorithm. Really, the one I wrote last tutorial is pretty bad. It generates lots of unsolvable grids, which if you’re strict, isn’t allowed. The real way of generating a Sudoku board is to generate a full valid board then blank out cells. To accomplish this I’ve added a new method to the Board class to generate a board based on a solver. Basically, it seeds the solver by filling in a single random cell with a random number.
Random rnd = new Random();
int row = rnd.Next(size);
int col = rnd.Next(size);
this[row, col].Value = rnd.Next(size) + 1;

int?[,] a = this.ToArray();
s.Solve(ref a);
this.FromArray(a);
Then, it runs the solver to fill the rest of the grid, which it then selectively blanks. This takes a little longer (depending on the solver) than the default method so I also added a radio button to choose the algorithm:
Ok, that’s all well and good but how did I go about writing a Sudoku solving algorithm? Well my algorithm is a recursive one. First, I build a new board structure of List<int>s that contain the possible number at each square, so for example, on givens, the list is 1 item long and contains the given number, whereas on blank squares it contains all possible numbers. Then the algorithm looks for all column, row, and box conflicts that eliminate numbers from lists until there are no more items that can be definitely removed. Then, the solver finds the shortest list with length greater than 1 in the grid and picks and random starting point in its list. Then for each possible “guess” it performs a deep copy of the board and recurses, this way it can backtrack if a certain guess results in no solution. When all guesses have been tried and no solution is found then the algorithm returns false. Alternatively, if the board is full, e.g. each list is exactly 1 long and there are no conflicts then the board is solved. You can check the code out in the download but I’ll be the first to tell you that this algorithm sucks, at least for solving the boards the program generates. I hereby challenge you to write a better one - That’s the great about supporting plug-ins, it allows you to offload functionality onto the user….err, I mean include extensibility.
Ok, so now that’s working, let’s start getting the game more playable. First we need to implement a timer, we could do this in two ways: a) create our own threaded or polling timer code or b) use the built-in WPF animation system….guess which one I’m going to cover (it really is easier, I promise). First, we need to redefine our timer display to more easily accept a bound input source:
<StackPanel Orientation="Horizontal" FlowDirection="LeftToRight">
    <TextBlock x:Name="MinNumber" FontSize="36" FontWeight="Bold"
               Text="{Binding Path=Tag, RelativeSource={RelativeSource Self}}"/>
    <TextBlock FontSize="36" FontWeight="Bold" Text=":"/>
    <TextBlock x:Name="SecNumber" FontSize="36" FontWeight="Bold"
               Text="{Binding Path=Tag, RelativeSource={RelativeSource Self}}"/>
    <TextBlock Margin="0,5,0,0" VerticalAlignment="Center" FontSize="24" FontWeight="Bold" Text=":"/>
    <TextBlock Margin="0,5,0,0" VerticalAlignment="Center" x:Name="SubSecNumber" FontSize="24" FontWeight="Bold"
               Text="{Binding Path=Tag, RelativeSource={RelativeSource Self}}"/>
</StackPanel>
We also need to define the storyboard that will control our animation. Since we don't want the timer to be associated with a particular event or control, we can define it as "free floating" by putting it in our <Window.Resources> tag (we'll write the Completed event handler later):
<Storyboard x:Key="TimerAnimation" Completed="TimerDone">
    <Int32Animation From="1" To="0"
                    Storyboard.TargetName="MinNumber" Storyboard.TargetProperty="Tag"/>
    <Int32Animation From="59" To="0" Duration="0:1:0" RepeatBehavior="Forever"
                    Storyboard.TargetName="SecNumber" Storyboard.TargetProperty="Tag"/>
    <Int32Animation From="59" To="0" Duration="0:0:1" RepeatBehavior="Forever"
                    Storyboard.TargetName="SubSecNumber" Storyboard.TargetProperty="Tag"/>
</Storyboard>
There are a couple of important things to notice here. First, how come the durations of the second counter and the 1/60th-of-a-second counter are only one minute and one second respectively? Well, the storyboard expands to fit the longest animation it contains. With RepeatBehavior="Forever" each animation repeats indefinitely; if we instead specify a duration, like we will in code later, then the animation will repeat within the overall duration. In other words, if the total timer runs for 10 minutes, then the second-number animation will run 10 times, since it counts down each second for a single minute. Second, why do the animations target the Tag property instead of Text? We have to do this because the Text property is of type string, which can't be animated using an Int32Animation (there is no StringAnimation; in fact, could you even make one?). To make this work we can define the TextBlocks like so:
<TextBlock x:Name="SecNumber" FontSize="36" FontWeight="Bold"
           Text="{Binding Path=Tag, RelativeSource={RelativeSource Self}}">
    <TextBlock.Tag>
        <s:Int32>59</s:Int32>
    </TextBlock.Tag>
</TextBlock>
This way the control automatically displays the contents of its tag, which we initialize to an integer. Next, let's add a pause button. Since this is a WPF tutorial, a normal button just doesn't cut it, so let's build a custom toggle button:
The needle in the stopwatch should animate as the timer advances, so we'll also implement a custom dependency property that allows the needle to be animated and databound. First, add a new user control to the project named "Stopwatch.xaml". Now, in the .xaml and .cs files that compose the control, replace the UserControl class with the ToggleButton type; this causes our control to inherit from ToggleButton instead of UserControl. Next, let's start by defining our custom property. To do this we need to define a property description object as a static member of our class:
public static readonly DependencyProperty CurrentTimeProperty =
    DependencyProperty.Register("CurrentTime", typeof(double), typeof(Stopwatch),
        new FrameworkPropertyMetadata(0.0, FrameworkPropertyMetadataOptions.AffectsRender));
When calling the DependencyProperty.Register method to create our descriptor, we specify the name of our property ("CurrentTime"), the type it will contain (a double), the type it is attached to (our new Stopwatch type), its default value (a double set to 0), and finally any extra flags. In this case we want our control to be redrawn when the property changes, so we include the AffectsRender flag. This is all well and good, but we still can't access the property from C# code; to make this work we also need to define a matching instance property:
public double CurrentTime
{
    get { return (double)this.GetValue(CurrentTimeProperty); }
    set { this.SetValue(CurrentTimeProperty, value); }
}
This property just wraps our WPF property in a more C#-friendly package. We don't actually require this in this particular instance, but it's good practice to define these properties in pairs automatically, since it avoids some cryptic "why doesn't this work?!" situations down the road if you forget. As for the control itself, it's really just a custom template; in fact, if you want to get technical, we could have built this entire control as a ToggleButton style, storing the current time in the Tag property, but that's no fun! To make the control itself work, I've added triggers to alter the top button image displayed (the lit one uses another bitmap effect called OuterGlowBitmapEffect) and to "clunk" the control when it is clicked:
<Trigger Property="ToggleButton.IsChecked" Value="True">
    <Setter TargetName="OffLight" Property="Visibility" Value="Hidden"/>
    <Setter TargetName="OnLight" Property="Visibility" Value="Visible"/>
</Trigger>
<Trigger Property="ToggleButton.IsPressed" Value="True">
    <Setter TargetName="MainGrid" Property="RenderTransform">
        <Setter.Value>
            <TranslateTransform X="1" Y="1"/>
        </Setter.Value>
    </Setter>
</Trigger>
Because we derive from ToggleButton, we can use the existing control logic and properties. I'll skip over most of the actual XAML that defines the look of the control, since you can find it in the download and we've covered basic shapes and gradients before; the only relevant part is the definition of the needle, which is part of a larger drawing:
<GeometryDrawing Brush="Red">
    <GeometryDrawing.Geometry>
        <PathGeometry>
            <PathGeometry.Figures>
                <PathFigure IsClosed="True" IsFilled="True" StartPoint="50,40">
                    <LineSegment Point="51,66"/>
                    <LineSegment Point="49,66"/>
                </PathFigure>
            </PathGeometry.Figures>
            <PathGeometry.Transform>
                <RotateTransform CenterX="50" CenterY="66"
                                 Angle="{Binding Path=CurrentTime,
                                                 RelativeSource={RelativeSource TemplatedParent},
                                                 Converter={StaticResource AngleConverter}}"/>
            </PathGeometry.Transform>
        </PathGeometry>
    </GeometryDrawing.Geometry>
</GeometryDrawing>
You can see here that the Angle property of our transform is bound to the custom CurrentTime property we defined. If you've been paying attention so far, you'll notice that the convention in WPF is for angles to be specified in degrees (0-360), while completion is usually specified as a double ranging from 0 to 1.0. How can we multiply our CurrentTime value by 360 in the process of binding it? The answer is converters. By defining a custom converter you can perform any operation on the values as they are bound. Since we just want to multiply by a number, we can use the following converter:
[ValueConversion(typeof(double), typeof(double))]
public class AngleConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (double)value * 360;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (double)value / 360;
    }
}
Then, we just instantiate our AngleConverter class in our resources section and reference it from the binding as done above. Converters also support parameters so if we really wanted to be slick we could supply the multiplier as a parameter to a generic multiplication converter.
Just like the board control, I’ve placed the stopwatch on the main window, this time under the timer numbering:
<clr:Stopwatch x:Name="StopwatchControl" Checked="ResumeTimer" Unchecked="PauseTimer"/>
I’ve also hooked up some of standard ToggleButton events to make the button actually do something. This works transparently, again because our control derives from ToggleButton. Also added is a new animation in our timer storyboard to animate the stopwatch needle:
<DoubleAnimation From="0" To="1"
                 Storyboard.TargetName="StopwatchControl" Storyboard.TargetProperty="CurrentTime"/>
And finally a new set of radio buttons to select the timer length:
Now, in the New Game button click handler, we need to add code to start the timer:
Storyboard s = this.Resources["TimerAnimation"] as Storyboard;
s.Stop(this);
First, we get the storyboard out of the window's resources section and stop it if it's already running. Next, we either disable and ghost out the related controls or set up the timer:
if (NoTimerRadio.IsChecked == true)
{
    MinNumber.Tag = 59;
    SecNumber.Tag = 59;
    SubSecNumber.Tag = 59;
    TimerControls.Opacity = 0.25;
    TimerControls.IsEnabled = false;
    StopwatchControl.IsChecked = false;
}
else
{
    Int32 length;
    if (EasyTimerRadio.IsChecked == true)
    {
        length = 15;
    }
    else if (MediumTimerRadio.IsChecked == true)
    {
        length = 10;
    }
    else
    {
        length = 5;
    }

    //the stopwatch controller
    s.Children[0].Duration = new Duration(TimeSpan.FromMinutes(length));
    //the minute ticker
    s.Children[1].Duration = new Duration(TimeSpan.FromMinutes(length));
    //the second ticker
    s.Children[2].RepeatBehavior = new RepeatBehavior(TimeSpan.FromMinutes(length));
    //the 1/60 second ticker
    s.Children[3].RepeatBehavior = new RepeatBehavior(TimeSpan.FromMinutes(length));
    ((Int32Animation)s.Children[1]).From = length - 1;

    StopwatchControl.IsChecked = true;
    MinNumber.Tag = length - 1;
    TimerControls.Opacity = 1;
    TimerControls.IsEnabled = true;

    s.Begin(this, true);
}
Board.IsEnabled = true;
And finally we begin the storyboard, setting the main window as its parent control. Then we need to define the event handler to handle the timer completion:
void TimerDone(object sender, EventArgs e)
{
    TimerControls.Opacity = 0.25;
    TimerControls.IsEnabled = false;
    StopwatchControl.IsChecked = false;

    if (Board.GameBoard.IsFull && Board.GameBoard.IsValid)
    {
        MessageBox.Show("You win!");
    }
    else
    {
        MessageBox.Show("You ran out of time! Better luck next time.");
    }
}
And the handlers for the pressed events of the stopwatch:
void PauseTimer(object sender, RoutedEventArgs e)
{
    Storyboard s = this.Resources["TimerAnimation"] as Storyboard;
    s.Pause(this);
    Board.IsEnabled = false;
}

void ResumeTimer(object sender, RoutedEventArgs e)
{
    Storyboard s = this.Resources["TimerAnimation"] as Storyboard;
    s.Resume(this);
    Board.IsEnabled = true;
}
Since the storyboard supports pausing and resuming, it's dead simple! After adding some extra housekeeping code and implementing a super-basic save-game system, using the built-in serialization functions in the .NET Framework to write the array representation of the game board to a file, we now have a working Sudoku game! This code is included in the download, so if you're still not a .NET pro (hey, it's ok!) you can check it out. Don't forget to come back next time for the 5th and final article in the series, where we finish off the app and sand off all the rough edges. I'll be covering spiffy stuff like:
See you next time!
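The save-game system mentioned above (serializing the board's array representation to a file) isn't shown in the article itself. A minimal sketch of one way it could look, assuming the era-appropriate BinaryFormatter and an illustrative file path:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static class SaveGame
{
    // Write the board's array representation to disk.
    // int?[,] is serializable out of the box, so no custom code is needed.
    public static void Save(string path, int?[,] board)
    {
        using (FileStream stream = File.Create(path))
        {
            new BinaryFormatter().Serialize(stream, board);
        }
    }

    // Read the array back; the caller would rebuild the Board from it,
    // e.g. via the FromArray method used earlier in the tutorial.
    public static int?[,] Load(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            return (int?[,])new BinaryFormatter().Deserialize(stream);
        }
    }

    static void Main()
    {
        // Round-trip a tiny board through a temp file (path is illustrative).
        string path = Path.Combine(Path.GetTempPath(), "sudoku.sav");
        int?[,] board = { { 1, null }, { 3, 4 } };
        Save(path, board);
        int?[,] loaded = Load(path);
        Console.WriteLine(loaded[0, 1] == null && loaded[1, 1] == 4);
    }
}
```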
where are the sources of this project?
Source code is on the download link at the top of the page.
http://blogs.msdn.com/coding4fun/archive/2006/11/30/1178206.aspx
Steve Ballmer announced VSTO support for Outlook add-ins in his Tech-Ed keynote today. This is exciting news--VSTO Outlook add-ins solve all the problems people have encountered while trying to build managed COM add-ins for Outlook using IDTExtensibility2. For example:
1) VSTO Outlook add-ins load into their own AppDomain--there is no longer a need for you to create a custom C++ shim.
2) VSTO Outlook add-ins solve the "Outlook won't shut down" issue--you no longer have to track every Outlook object you use, set it to null, and force a garbage collection, or even worse call ReleaseComObject on anything. Your add-in will always cleanly shut down.
3) VSTO Outlook add-ins solve the "Trust all installed templates and add-ins" issue. Because the VSTO Outlook add-in loader technology uses the same runtime and security model used by VSTO, if your add-in is trusted by .NET policy, it will work even when "Trust all installed templates and add-ins" is unchecked.
4) The VSTO Outlook add-in project is all wired up and ready to go. Create a new project, press F5, and Outlook starts up. No extra settings to configure.
5) VSTO Outlook add-ins have a nice strongly typed programming model that is very similar to the VSTO Word and Excel programming models. Say goodbye to the old COM-centric IDTExtensibility2 interface. The Outlook add-in project has a code item called ThisApplication.cs or ThisApplication.vb where you write your code. Instead of being passed a weakly typed application object, the ThisApplication class you write your code in derives from a base class that wraps the Outlook Application object, so you can use "this" in C# and "Me" in VB to get to Outlook Application properties, methods, and events. Here's what code in a VSTO Outlook add-in looks like:
using System;
using System.Windows.Forms;
using Microsoft.VisualStudio.Tools.Applications.Runtime;
using Outlook = Microsoft.Office.Interop.Outlook;
namespace OutlookAddin1
{
    public partial class ThisApplication
    {
        private void ThisApplication_Startup(object sender, System.EventArgs e)
        {
            MessageBox.Show(String.Format(
                "There are {0} inspectors and {1} explorers open.",
                this.Inspectors.Count, this.Explorers.Count));
            this.NewMail += new Outlook.ApplicationEvents_11_NewMailEventHandler(
                ThisApplication_NewMail);
        }

        void ThisApplication_NewMail()
        {
            MessageBox.Show("New mail!");
        }

        private void ThisApplication_Shutdown(object sender, System.EventArgs e)
        {
            MessageBox.Show("Shutting down.");
        }

        #region VSTO generated code
        private void InternalStartup()
        {
            this.Startup += new System.EventHandler(ThisApplication_Startup);
            this.Shutdown += new System.EventHandler(ThisApplication_Shutdown);
        }
        #endregion
    }
}
Download VSTO support for Outlook now!
This installer installs VSTO support for Outlook on top of an existing VSTO Beta 2 installation.
http://blogs.msdn.com/eric_carter/archive/2005/06/06/423986.aspx
KDEUI
KAboutApplicationDialog Class Reference

Standard "About Application" dialog box.
#include <kaboutapplicationdialog.h>
Constructor & Destructor Documentation
Constructor.
Creates a fully featured "About Application" dialog box.
Definition at line 55 of file kaboutapplicationdialog.cpp.
Definition at line 250 of file kaboutapplicationdialog.cpp.
http://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKAboutApplicationDialog.html
- FAQ Topic - What does the future hold for ECMAScript? (2008-11-22)
- Does removing an element also remove its event listeners?
- Re: Muuttujien leveydet C++:ssa?
- Update location.hash without adding to history?
- FAQ Topic - Internationalisation and Multinationalisation in javascript. (2008-11-21)
- pass function into another function as parameter?
- Two weirdnesses..are they related? (IE7)
- cross domain XHR
- OnComm event with JavaScript doesn't fire
- onclick only works once
- Can you suggest a better way of Reporting Errors?
- Dragging something in JS
- JAVAScript Public Key Encryption
- hiding javascript function call from status bar.
- 3rd party page access from JavaScript. Is this possible?
- Chinese antique
- Definition/Standard for DOM node property offsetParent
- Copy Clipboard
- Side-effect only requests
- FAQ Topic - What is the document object model? (2008-11-20)
- Update labels
- Haskell functions for Javascript
- Unknown Errors Use of getBoxObjectFor() is deprecated. Try to use element.getBoundingClientRect() if possible.
- Encrypted code with certificate
- Where's the Window()
- IE6 memory leak - very fiddly
- native code attached to onblur/onfocus event handler
- Help on onchange event for refreshing the page
- FAQ Topic - What are object models? (2008-11-19)
- Create Login page
- check boxes again
- Re: comp.lang.javascript FAQ - Quick Answers 2008-11-17
- declare variables document.write()
- frames and back action
- Re: How to implement a mask of visibility/invisibility of a set of <div> elements?
- Hide/Show Divs
- having difficulty calling my functions (directly not threw events)
- Javascript onSubmit
- FAQ Topic - What is JScript? (2008-11-18)
- Link within a div that has onclick
- An oddity when clicking checkbox
- Write to xml
- ng2000 keeps spamming newsgroups
- noobslide help please
- Do X if element is Y
- kiddy question about newsticker snippet
- FAQ Version 11
- Escape .(dot) in a Regular Expression
- Simple Ajax
- How to exit a form validation function so that the form isn't submitted
- FAQ Topic - How do I generate a random integer from 1 to N? (2008-11-16)
- Trying to create an array?
- on-anchor-click?
- [jQuery] Why does img_width always return 0 (zero)?
- print pdf using javascript
- Modify code from random to sequence
- FAQ Topic - Why does 1+1 equal 11? or How do I convert a string to a number? (2008-11-15)
- Jlint another script validation problem
- Validating Javascript function using JLint
- Javascript and DIV popup help...
- When to minify?
- Display of the image from JavaScript
- Compare string
- Inheritance Chain
- FAQ Topic - Why does K = parseInt('09') set K to 0? (2008-11-14)
- How to assign event handler in css?
- Noob Q: Different ways to run code in script tags
- mouseout and checkboxes
- Writing to popups
- Passing event as parameter to dynamic function
- How do I create a function that copies all the fields?
- Closure
- Augmenting functions
- FAQ Topic - Why does simple decimal arithmetic give strange results? (2008-11-13)
- weird var problem
- unescape() and escape styles question
- temporarily draw freehand on a page using javascript
- invalid flag after regular expression
- Href Links in Dynamic table
- JavaScript / ECMAScript
- You cowboys were right
- Re: n00b questions for javascript!
- Re: n00b questions for javascript!
- local/global scope confusion
- Feature detection vs browser detection
- Google Toolbar autofill doesn't fire change events
- when popup window content loaded
- FAQ Topic - How do I format a Number as a String with exactly 2 decimal places? (2008-11-12)
- What does Google Calendar's grid uses?
- stupid question
- eval, alternative
- ajax to html
- Unhide text using a radio button
- Events
- FAQ Topic - What online resources are available? (2008-11-11)
- Re: comp.lang.javascript FAQ - Quick Answers 2008-11-10
- Image creation and 'on load' behavior
- IE7 Javascript Errors
- ie7 and prototype windows
- charCodeAt
- DocType impact on javascript
- Am I using 'this' to often?
- Embedded <divs> with events: How to prevent the parent div's event from being fired when the embedded div's event is fired?
- Nokia 5310 XpressMusic Mobile Phones
- Dr. Stephen R.Covey LIVE! in India in Jan 2009
- Will MS adopt WebKit?
- FAQ Topic - What books cover javascript? (2008-11-10)
- variables and ajax
- "Back" no actions in Firefox
- unable to validate syntax need help ASAP
- EECP treatment - No Bypass Surgery
- Pass onmouseover event to the element underneath
- FAQ Topic - What does the future hold for ECMAScript? (2008-11-09)
- newbie: constants in JavaScript
- Is it Possible to Programmatically Customizing Firefox3 Settings?
- Hidden Forms
- show/hide problem with explorer 7
- Local jawascript search to search pages from only my website
- document.domain problem
- FAQ Topic - How can old comp.lang.javascript articles be accessed? (2008-11-08)
- Dynamically add frames to frameset
- SaveAs Command
- How can I display the content literally without any change?
- about document.image1
- call function from within another
- MS08-045 - Cumulative Security Update for Internet Explorer (953838)and frame location
- FAQ Topic - Internationalisation and Multinationalisation in javascript. (2008-11-07)
- Measuring recurring elapsed time
- Re: Full / Part Time Jobs
- focus listener on non-form elements in Safari/Chrome
- changing specific <div> status
- Change images onclick
- Javascipt Image effects
- retrieving HTML text
- get text from dom element
- FAQ Topic - What is the document object model? (2008-11-06)
- iFrame issue
- Dojo v. Crockford re privates
- Updated Conventions Document
- ajax run 2 scripts?
- popup window
- FAQ Topic - What are object models? (2008-11-05)
- svg in firefox
- pass javascript in xmlHttp.responseText
- How to create a textarea dependant on flag in javascript
- Possible to introspect a function & parameters ?
- Tooltip box
- javascript onclick "save as" - firefox
- Seeking to defeat auto-fill
- switch or select case and code inside it
- Passing a lot of data
- FAQ Topic - What is JScript? (2008-11-04)
- 'new' operator for built-in types?
- Looping to populate selections for IE & Firefox
- pls help w/unusual code.. (YUI/JSON)
- Activating toolbar item from javascript
- how to not hide a division
- Parsing XML with namespaces in IE.
- Quick Question
- Function( confusion
- calling a function with parameters packed in array
- sorting a textarea.
- FAQ Topic - How do I generate a random integer from 1 to N? (2008-11-02)
- dynamically causing file browser to appear
- FAQ Topic - Why does 1+1 equal 11? or How do I convert a string to a number? (2008-11-01)
- JavaScript Convention Documents
- 80 columns wide? 132 columns wide?
- Refreshing parent page from a child page opened as a modal dialog box
- SpiderMonkey Multithreading String Issues
- Closure code to assign image index to onload handler
- Generated JS in Google's Mobile Talkgadget
- input checkbox onclick not working via DOM on IE7, FF, WebKit
- Passing variable from function to html body...
- Skipping OnBeforeUnload event
- Standalone Javascript interpreter for Linux?
- FAQ Topic - Why does K = parseInt('09') set K to 0? (2008-10-31)
- input checkbox onchange not working on IE7
- testing if date is in past
- Opening a stream in Word with JS
- 2 dimensional array - sorting mechanism
- ie div onclick problem
- Re: comp.lang.javascript FAQ - META 2008-10-29
- Good visual javascript aide?
- FAQ Topic - Why does simple decimal arithmetic give strange results? (2008-10-30)
- Please Assist With Submission to Remote Iframe
- simply super
- Script works only with firebug installed, or in non-mozilla
- About (function(){})()
- Failing rollover image
- a href with static and dynamic content using JavaScript
- not override onload
- frame collection versus gEBI
- FAQ Topic - How do I convert a Number into a String with exactly 2 decimal places? (2008-10-29)
- UL LI get text
- Direct file download
- Re: Passing on variables to a bank shopping cart
- Check/Uncheck all checkboxes with name as 'name[]'
- newbie: how to set a breakpoint
- change position of alert()-is it possible?
- Help Jquery: unable to register a ready function
- How to check all the checkboxes if checkbox name is 'name[]'
- Writing an XML document via Javascript.
- Reading data from user-submitted XML file.
- FAQ Topic - What online resources are available? (2008-10-28)
- style.cursor on IE
- Jquery not registering the ready func
- keydown listener for div element and "event forwarding"
- Data persistence and refresh
- FunctionExpression's and memory consumptions
- FAQ Topic - What books cover javascript? (2008-10-27)
- JavaScript Math vs Excel
- Best practices for error handling
- shopping cart
- Formatting the clipboard
- Webkit Javascript Application in c++
- Need help for javascript/webkit
- Pages doesn't load properly until mouse movement
- Unsafe Names for HTML Form Controls
- general function who activate callback on every object - please help
- show/hide any division
- FAQ Topic - What does the future hold for ECMAScript? (2008-10-26)
- can't get clientid of .net label in a JS file
- How to add array elements to parent window from child window
- FAQ Topic - I have a question that is not answered in here or in any of the resources mentioned here but I'm sure it has been answered in comp.lang.javascript. Where are the archives located? (2008-10-25)
- Need assistance reinventing the wheel.....
- Javascript Not Working on Page
- New Widget
- Cross browser event issues
- Re: IE 7 Zoom Problem
- Is a closure's scope accessible by untrusted code?
- e.layerX problem on Macintosh browsers
- JSON array problem
- FAQ Topic - Internationalisation and Multinationalisation in javascript. (2008-10-24)
- Removing all but first item in drop down list
- Sparkline Graphs no longer work in FireFox
- Multi column Listbox
- earn on line
- Do people think this is logical behavior from the string split method?
- Re: I need assistance using callback
- UnLoad out of body
- Decoding html pages
- Events problems
- I need assistance using callback
- Re: How to pass a parameter via an image.onload function call?
- Re: How to pass a parameter via an image.onload function call?
- Re: How to pass a parameter via an image.onload function call?
- Motorola Mobile Technology
- JSDB?
- Galileo RIA Toolkit
- FAQ Topic - What is the document object model? (2008-10-23)
- save/restore screen area
- Possible to tell when a hidden field's value changes
- onsubmit called even when cancel button hit in onbeforeunload
- Manipulating a textarea
- want to move images in Y direction
- Dojo or what ?
- Re: Debug Request
- Re: Debug Request
- FAQ Topic - What are object models? (2008-10-22)
- json multidimensional array
- How can server interrupt client in browser?
- Advanced Core Javascript
- select onchange an many values
- Re: IE 7 Zoom Problem
- <! ... v. <!-- ...
- gridView row selected
- Crockford's new video: "Web Forward"
- Permission denied... parent page and <iframe>
- FAQ Topic - What is JScript? (2008-10-21)
- FAQ 5.40, Ajax and GET
- How to use DOM to check a checkbox
- IE Error "Stop Running This Script"
- CSS has bigger pixels than canvas?
- get Table cell value
- AJAX + delay
- Re: Whats the point in these groups?
- Yahoo! UI Library
- HTML or JavaScript
- Download an XML file from the server
- keycode or keypress button
- Elegance is an attitude
- Two windows: one for IE7 and the other for FF3
- FAQ Topic - What is ECMAScript? (2008-10-19)
- Re: Whats the point in these groups?
- Re: Script not accepting some values
- Re: Whats the point in these groups?
- cross-domain
- Convert state of radiobutton => byte on send
- FAQ Topic - Why was my post not answered? (2008-10-18)
- Re: comp.lang.javascript FAQ - META 2008-10-15
- Re: Whats the point in these groups?
- Get Array Name
- Passing an Array to a form field
- Re: Whats the point in these groups?
- Encodings of javascript
- How to completely destroy a script and make it disappear forever.
- why my javascript for menu doesn't work properly? (sth wrong with onMouseOut event)
- JESUS in the QURAN
- !!!...Who is Jesus
- Why did I Embrace Islam (( Scientific Miracles in the Quran
- Synchronous dynamic script loading
- Re: JavaScript/HTML Developer position in Sunnyvale CA Bay area
- FAQ Topic - What should I do before posting to comp.lang.javascript? (2008-10-17)
- Can this be done with Javascript? Microcontrollers/TCP IP
- Update function variable on same page
- Update function variable on same page
- Create Dynamic Arrays
- Rendering HTML prior parsing Javascript
- Re: Caching icons in browser
- Problem(s) on number auto-format function.
- Plus sign not shown
- FAQ Topic - What questions are on-topic for CLJ? (2008-10-16)
- Javascript & Captcha
- onchange event in select element with keyboard scrolling in IE6
- freewebs
- eecp in india
- Filtering or avoiding double click on form submission
- FAQ Topic - Which newsgroups deal with javascript? (2008-10-15)
- How to use a variable name in a regex?
- Strange RegExp problem
- JavaScript Image Popup's
- ppk on JavaScript
- Moving objects in Javascript question
- unique reference id
- Re: Split Array
- Unescape escapeXML text
- What do you use instead of enum?
- FAQ Topic - How do I generate a random integer from 1 to N? (2008-10-14)
- Re: comp.lang.javascript FAQ - Quick Answers 2008-10-13
- how to load client-side xml file?
- Refreshing page properly on back button
- Is it possible to use Image Map with Lightbox ?
- Re: Air Max Air Max 90 man Air Max 90 women Air Max LTD man Air MaxLTD women Air
- conflict with latest windows update
- FAQ Topic - Why does 1+1 equal 11? or How do I convert a string to a number? (2008-10-13)
- textbox id is not defined error for firefox, works fine in IE
- Variables for JavaScript artificial intelligence
- innerHTML
- Re: Air Max Air Max 90 man Air Max 90 women Air Max LTD man Air MaxLTD women Air
- toggle node with checkbox
- Auto spell check problem
- If you think Kenny is a jerk...
- how does one get a set of items that are in one table but not in another?
- FAQ Topic - Why does K = parseInt('09') set K to 0? (2008-10-12)
- JS/DHTML Grid Product - Favorites?
- Overriding .textContent of BR elements
- comp.lang.python u see website get some doller
- Prototype WTP 0.2 released,this release for Prototype 1.6.0
- XML Parsing Problem in Internet Explorer
- updating data in object
- script wanted not standard tab menu thing
- FAQ Topic - Why does simple decimal arithmetic give strange results? (2008-10-11)
- Re: Moving dynamically created table rows up and down in an HTML table
- FAQ Sections - Feedback Wanted
- ready function too late?
- Closures Explained
- Find smallest distance between numbers in Array
- love dating and romance with hot modeles live u just click
- How can I made this
- Highlight block A elements
- dynamic display
- FAQ Topic - How do I convert a Number into a String with exactly 2 decimal places? (2008-10-10)
- How is unit testing performed at your location?
- hrefs and javascript, navigate and other beasties
- Html Listbox Question
- Inherit Date Class?
- Are there factors that may prevent script jittering?
- FAQ Topic - What online resources are available? (2008-10-09)
- Microsoft + jQuery = Really cool Websites?
- open MS Access databse in Firefox
- make array empty
- SWFUpload class problem
- comp.lang.javascript FAQ - META 2008-10-08
- JavaScript Examples
- autosuggest key events
- FAQ Topic - What books cover EcmaScript? (2008-10-08)
- dyn-drive calendar script not working..
- select onchange with typing in Webkit
- IE 7 Zoom Problem
- IE6 Form and Its Properties After Submit
- Modifying a variable in a forEach loop
- Finding out how many checkboxes are in a table
- Giving a window an onblur event
- Resetting GET query values
- autosuggest key controls
- EcmaScript, ECMAScript, or JavaScript ?
- FAQ Topic - What does the future hold for EcmaScript? (2008-10-07)
- Change element opacity in Firefox
- Get rid of 'eval'
- maintain opener property after refresh/reload of popup
- Create Element in Firefox
- Read entire xml file
- As
- Places that are hiring in my area
- FAQ Topic - I have a question that is not answered in here or in any of the resources mentioned here but I'm sure it has been answered in clj. Where are the clj archives located? (2008-10-06)
- why would sortable.create not fire onUpdate?
- Basic array stuff
- FAQ Maintainer - Bart?
- Memory Leaks, createElement, and Form Controls
- how to get the privous URL of the browser using javascript
- FAQ Topic - What is the document object model? (2008-10-05)
- setTimeout(this.doAnimation, 1000) will fail if it is defined in aclass definition
- innerHTML = JavaScript object
- The new operator
- FAQ Topic - What are object models? (2008-10-04)
- FAQ Noise
- Changes to FAQ
- Get all elements in firefox
- Getting option values as array
- Firefox get shapes problem.
- Firefox2 problem with dynamic rendering (it's fine in IE, Safari and FF3)
- Set Encoding Dynamically in IE6 Instead of Enctype
- closures in JavaScript
- Hooking into page's onsubmit event from code within page?
- FAQ Topic - What is JScript? (2008-10-03)
- make a date pretty
- id names are global variables?
- Set global variable from within a Class
- Trying to pass a variable to xmlHttp.onreadystatechange
- Where to find this script code? or similar?
- Text wrap problem firefox
- Nice websites here123
- Post file higher than public_html folder
- FAQ Topic - What is ECMAScript? (2008-10-02)
- FAQ Clean-Up
- AJAX refresh
- selectedIndex not functioning properly..?..
- Offer programmer for our site - work with dojo and dijit
- Best implementation of setTimeout / clearTimeout
- Toggling the SurroundContents method
- comp.lang.javascript FAQ - META 2008-10-01
- FAQ Topic - How do I direct someone to this FAQ? (2008-10-01)
- Visit this sites please
- Override asynchronous (XMLHttpRequest) activity?
- Tapestry template exception
- Re: Partial evaluation in JavaScript
- FAQ Topic - Why was my post not answered? (2008-09-30)
- Load Body html on button click.
- Javascript does not run in intranet zone?
- javascript to execute java program
- Javascript about to be disabled event
- execute the radio and selective button and return the message on the same page
- Detecting status change in a checkbox
- setting window size and php
- FAQ Topic - What do I have to do before posting to clj? (2008-09-29)
- Is jQuery worth a second look?
- Cursor position in textarea? (XY pos, not caret index)
- Re: JavaScript / AJAX RIA Developer Job Opening (SYS-CON Media)
- javascript+safari
- Re: Variable scope problem
- [beginer]Why this script failed to work?
- BLOCK POPUPS
- Linking external page
- JavaScript Animation Issues...
- Probably A Simple Answer...Text Display...
- Displaying Ajax data and using timers properly
- FAQ Topic - What questions are off-topic for clj? (2008-09-28)
- Handling DOM event on element not on top
- How to change color of <select>?
- FAQ Topic - Which newsgroups deal with javascript? (2008-09-27)
- Re: Javascript project seeking competent scripter/coder
- Re: Netscape Communications Corp.'s JavaScript language
- Abbreviate Currency using Javascript
- Open Source project needs quick help from regexp expert: XML Name validation
- Javascript on Nokia phones
- Print Screen and open in paint
- "Access is denied" in IE on node.focus()
- FAQ Topic - Why is my AJAX page not updated properly when using an HTTP GET request in Internet Explorer? (2008-09-26)
- onclick behaves differently when defined via javascript
- Replace Line breaks
- Logarithmic scale
- Cisco Tries to Break Out of the Data Center Role
- Can Javascript work with Multiple lines of Text.
- script debug errors
- Prevent scrolling
- need some newbie help...
- Dynamically generated functions with variable-based payload
- Use of \n in strings
- FAQ Topic - What is AJAX? (2008-09-25)
- Java Applet<-->JavaScript communication (Firefox 3).
- functions and arguments.length; passing unknown number of arguments
- Obtaining query_string from JavaScript
- Re: Bust my code!
- .removeChild Not Working...
- .removeChild Not Working...
- comp.lang.javascript FAQ - META 2008-09-24
- closure question
- FAQ Topic - How do I get my browser to report javascript errors? (2008-09-24)
- Dynamic forms and functions
- KB-Traversal in JavaScript AI
- Re: Bust my code!
- Re: Bust my code!
- Unselect one listbox when second is selected
- Re: Bust my code!
- Elementos de un array
- Need some JavaScript puzzles
- OT: problems with orkut.com
- inputbox
- Unexpected clearing of radio button
- FAQ Topic - How do I open a new window with javascript? (2008-09-23)
- java script for "post a comment "
- Declaring public variables inside a function
- Why does JavaScript Regx behave weirdly?
- Groping around in the DOM
- Javascript events image load
- cross domain scripting problem in IE6 (not IE7)
- Changing case in a sentence to Capitalize Case.
- Newbie needs help with setInterval
- Factors that may prevent jittering?
- FAQ Topic - Why doesn't the global variable "divId" always refer to the element with id="divId"? (2008-09-22)
- good news
- Re: ECMAScript Secure Transform. My idea, i think...
- Re: ECMAScript Secure Transform. My idea, i think...
- Cant find proper method or idea how to...
- Adding & Removing Form Options
- dual scripts problem
- FAQ Topic - When should I use eval? (2008-09-21)
- JavaScript implementation
- validating two forms
- SFX : SquirrelFish eXtreme.
- tricky javascript situation
- FAQ Topic - How do I access a property of an object using a string? (2008-09-20)
- Errors in walking through XML
- this keyword and closures
- system sound
- JavaScript programmer at Earth.org
- How to empty a node
- Javascript and working with dates
- gwt, qooxdoo - framework choice for a C++ && apache application
- qooxdoo || gwt for a apache + c++ module web application
- gwt, qooxdoo - framework choice for a C++ && apache application
- RungeKutta.js
- If Else format
- RegEx noob question
- Can the code be simplified?
- FAQ Topic - How do I download a page to a variable? (2008-09-19)
- RegExp.test() with global flag set
- JavaScript Progress bar is timing out.
http://bytes.com/sitemap/f-63.html
Matplotlib issue on OS X ("ImportError: cannot import name _thread")
At some point in the last few days, Matplotlib stopped working for me on OS X. Here's the error I get when trying to import matplotlib:
Traceback (most recent call last):
  File "/my/path/to/script/my_script.py", line 15, in <module>
    import matplotlib.pyplot as plt
  File "/Library/Python/2.7/site-packages/matplotlib/pyplot.py", line 34, in <module>
    from matplotlib.figure import Figure, figaspect
  File "/Library/Python/2.7/site-packages/matplotlib/figure.py", line 40, in <module>
    from matplotlib.axes import Axes, SubplotBase, subplot_class_factory
  File "/Library/Python/2.7/site-packages/matplotlib/axes/__init__.py", line 4, in <module>
    from ._subplots import *
  File "/Library/Python/2.7/site-packages/matplotlib/axes/_subplots.py", line 10, in <module>
    from matplotlib.axes._axes import Axes
  File "/Library/Python/2.7/site-packages/matplotlib/axes/_axes.py", line 22, in <module>
    import matplotlib.dates as _ # <-registers a date unit converter
  File "/Library/Python/2.7/site-packages/matplotlib/dates.py", line 126, in <module>
    from dateutil.rrule import (rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY,
  File "/Library/Python/2.7/site-packages/dateutil/rrule.py", line 14, in <module>
    from six.moves import _thread
ImportError: cannot import name _thread
The only system change I can think of was the Apple-forced NTP update and maybe some permission changes I did in /usr/local to get Brew working again.
I tried reinstalling both Matplotlib and Python-dateutil via Pip, but this did not help. Also tried a reboot. I'm running Python 2.7.6, which is located in /usr/bin/python. I'm running Yosemite (OS X 10.10.1).
Downgrading python-dateutil fixed it for me:

sudo pip uninstall python-dateutil
sudo pip install python-dateutil==2.2
I had the same error message this afternoon as well, although I did recently upgrade to Yosemite. I'm not totally sure I understand why reverting dateutil to a previous version works for me, but since running the above I'm having no trouble (I generally use pyplot inline in an ipython notebook).
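To narrow down which package is the broken link, a small diagnostic can be run (this sketch is not from the original answer; the module names are taken from the traceback above):

```python
import importlib

def can_import(name):
    """Return True if `name` imports cleanly, False on ImportError."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# The chain fails at `from six.moves import _thread`, so checking these
# two modules tells you whether six or dateutil is the culprit.
for mod in ("six.moves", "dateutil.rrule"):
    print(mod, "ok" if can_import(mod) else "broken")
```

If `six.moves` reports broken, the installed six is too old for the dateutil version; if only `dateutil.rrule` is broken, reverting dateutil as above is the quicker fix.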
From: stackoverflow.com/q/27630114
https://python-decompiler.com/article/2014-12/matplotlib-issue-on-os-x-importerror-cannot-import-name-thread
Road map

Step 1: Write the code for a heap using ArrayLists. Here's some sample code to start with. Assume that the generic type E implements the Comparable interface.

// class MinBinaryHeap can be instantiated for a specific type.
// Example: MinBinaryHeap<Integer> h = new MinBinaryHeap<Integer>();
public class MinBinaryHeap<E extends Comparable<E>> {
    /** ArrayList to store the heap. An ArrayList is preferred to an
        array since it can expand dynamically. **/
    private ArrayList<E> heapArray;

    MinBinaryHeap() {
        heapArray = new ArrayList<E>();
    }

    private void bubbleUp() { }
    private void bubbleDown() { }
    public void insert(E value) { }
    public E getMin() { return null; }
}
(ii)
Sample input:

Process[] procs = new Process[10];
procs[0] = new Process(2, 12, "MS Word");
procs[1] = new Process(1, 10, "Powerpoint");
procs[2] = new Process(12, 10, "Skype");
procs[3] = new Process(5, 10, "Chrome");

Sample output:

Order  Process id  Process Name
1      1           Powerpoint
2      12          Skype
3      5           Chrome
4      2           MS Word
FAQs:

(1) What is the Comparable interface?
Any class that implements the methods from the Comparable interface supports the compareTo method. Using the compareTo method, you can compare two objects of the same class.

(2) Why use a priority queue, not just a sort, for process scheduling?
In an operating system (OS), the list of processes is dynamic. New processes keep getting added all the time. Hence, it is not possible to pre-sort the list of all processes and then start scheduling. The OS has to keep adding processes to priority queues dynamically. In our project we assume that the list of processes is known. However, our goal is to play with heaps, not with other sorting algorithms. If that doesn't convince you, how about: because the instructor said so and he will be grading this project.

(3) My output doesn't match the sample output exactly.
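For intuition, the min-heap scheduling idea can be sketched in a few lines of Python with the standard heapq module. This is an illustration only, with made-up (priority, pid, name) tuples; it is not a substitute for the required MinBinaryHeap class, and the tuple layout here is an assumption rather than the assignment's actual Process constructor order.

```python
import heapq

# (priority, pid, name) tuples; heapq maintains a min-heap on the
# first tuple element, so the lowest priority value is popped first.
procs = [(2, 2, "MS Word"), (1, 1, "Powerpoint"),
         (12, 12, "Skype"), (5, 5, "Chrome")]
heapq.heapify(procs)

schedule = []
while procs:
    priority, pid, name = heapq.heappop(procs)
    schedule.append(name)

print(schedule)  # ['Powerpoint', 'MS Word', 'Chrome', 'Skype']
```

The bubbleUp/bubbleDown methods in the Java skeleton are what heapify and heappop do internally.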
https://www.scribd.com/document/267168720/HeapProject-1-docx
Retrieve the current value of the specified event property of type void*.
#include <screen/screen.h>
int screen_get_event_property_pv(screen_event_t ev, int pname, void **param)
The handle of the event whose property is being queried. The event must have one of the Screen event types.
The name of the property whose value is being queried. The properties available for query are of type Screen property types.
The buffer where the retrieved value(s) will be stored. This buffer must be of type void*.
Function Type: Immediate Execution
This function stores the current value of an event property in a user-provided buffer. The list of properties that can be queried per event type are listed as follows:
0 if a query was successful and the value(s) of the property are stored in param, or -1 if an error occurred (errno is set).
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.qnxcar2.screen/topic/screen_get_event_property_pv.html
Return Oriented Programming on ARM (32-bit)
Anirudh, originally published at icyphox.sh
Before we start anything, you're expected to know the basics of ARM assembly to follow along. I highly recommend Azeria's series on ARM Assembly Basics. Once you're comfortable with it, proceed with the next bit: environment setup.
Setup
Since we’re working with the ARM architecture, there are two options to go forth with:
- Emulate — head over to qemu.org/download and install QEMU. And then download and extract the ARMv6 Debian Stretch image from one of the links here. The scripts found inside should be self-explanatory.
- Use actual ARM hardware, like an RPi.
For debugging and disassembling, we'll be using plain old gdb, but you may use radare2, IDA, or anything else, really. All of which can be trivially installed.
And for the sake of simplicity, disable ASLR:
$ echo 0 > /proc/sys/kernel/randomize_va_space
Finally, the binary we'll be using in this exercise is Billy Ellis' roplevel2.
Compile it:
$ gcc roplevel2.c -o rop2
With that out of the way, here’s a quick run down of what ROP actually is.
A primer on ROP
ROP, or Return Oriented Programming, is a modern exploitation technique that's used to bypass protections like the NX bit (no-execute bit) and code signing. In essence, no code in the binary is actually modified and the entire exploit is crafted out of pre-existing artifacts within the binary, known as gadgets.
A gadget is essentially a small sequence of code (instructions) ending with a ret, or return, instruction. In our case, since we're dealing with ARM code, there is no ret instruction, but rather a pop {pc} or a bx lr. These gadgets are chained together by jumping (returning) from one onto the other to form what's called a ropchain. At the end of a ropchain, there's generally a call to system() to achieve code execution.
In practice, the process of executing a ropchain is something like this:
- confirm the existence of a stack-based buffer overflow
- identify the offset at which the instruction pointer gets overwritten
- locate the addresses of the gadgets you wish to use
- craft your input keeping in mind the stack’s layout, and chain the addresses of your gadgets
LiveOverflow has a beautiful video where he explains ROP using “weird machines”. Check it out, it might be just what you needed for that “aha!” moment :)
Still don’t get it? Don’t fret, we’ll look at actual exploit code in a bit and hopefully that should put things into perspective.
Exploring our binary
Start by running it, and entering any arbitrary string. On entering a fairly large string, say, “A” × 20, we see a segmentation fault occur.
Now, open it up in gdb and look at the functions inside it.
There are three functions that are of importance here: main, winner, and gadget. Disassembling the main function:
We see a buffer of 16 bytes being created (sub sp, sp, #16), and some calls to puts()/printf() and scanf(). Looks like winner and gadget are never actually called.
Disassembling the gadget function:
This is fairly simple: the stack is being initialized by pushing {r11}, which is also the frame pointer (fp). What's interesting is the pop {r0, pc} instruction in the middle. This is a gadget. We can use this to control what goes into r0 and pc. Unlike in x86, where arguments to functions are passed on the stack, in ARM the registers r0 to r3 are used for this. So this gadget effectively allows us to pass arguments to functions using r0, and subsequently jump to them by passing an address in pc. Neat.
Moving on to the disassembly of the winner function:
Here, we see calls to puts(), system() and, finally, exit(). So our end goal here is to, quite obviously, execute code via the system() function.
Now that we have an overview of what’s in the binary, let’s formulate a method of exploitation by messing around with inputs.
Messing around with inputs :)
Back in gdb, hit r to run and pass in a patterned input, like in the screenshot.
We hit a segfault because of invalid memory at address 0x46464646. Notice that pc has been overwritten with our input. So we smashed the stack all right, but more importantly, the overwrite happens at the letter ‘F’.
Since we know the offset at which pc gets overwritten, we can now control program execution flow. Let’s try jumping to the winner function.
Disassemble winner again using disas winner and note down the address of the second instruction, add r11, sp, #4. We’ll use Python to print our input string, replacing FFFF with the address of winner. Note the endianness.
$ python -c 'print("AAAABBBBCCCCDDDDEEEE\x28\x05\x01\x00")' | ./rop2
The reason we don’t jump to the first instruction is that we want to control the stack ourselves. If we allow push {r11, lr} (the first instruction) to occur, the program will pop those off after winner is done executing, and we will no longer control where it jumps to.
So that didn’t do much; it just prints the string “Nothing much here…”. The function does, however, contain a call to system(), which somehow needs to be given an argument to do what we want (run a command, spawn a shell, etc.).
To do that, we’ll follow a multi-step process:
- Jump to the 2nd instruction of gadget. This will pop r0 and pc.
- Push our command to be executed, say “/bin/sh”, onto the stack. This will go into r0.
- Then, push the address of system(). This will go into pc.
The pseudo-code is something like this:
string = "AAAABBBBCCCCDDDDEEEE"
gadget = # addr of gadget
binsh  = # addr of /bin/sh
system = # addr of system()
print(string + gadget + binsh + system)
Clean and mean.
The exploit
To write the exploit, we’ll use Python and the absolute godsend of a library: struct. It lets us pack the bytes of addresses in the endianness of our choice. It probably does a lot more, but who cares.
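A quick demonstration of the packing we rely on, using the gadget address from this writeup. Our ARM target is little-endian, so the "&lt;I" format is the one we want:

```python
import struct

addr = 0x00010550
little = struct.pack("<I", addr)  # little-endian: b'\x50\x05\x01\x00'
big = struct.pack(">I", addr)     # big-endian:    b'\x00\x01\x05\x50'

# unpack reverses the operation
assert struct.unpack("<I", little)[0] == addr
```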
Let’s start by fetching the address of /bin/sh. In gdb, set a breakpoint at main, hit r to run, and search the address space for the string “/bin/sh”:
(gdb) find &system, +9999999, "/bin/sh"
One hit, at 0xb6f85588. The addresses of gadget and system() can be found from the disassemblies from earlier. Here’s the final exploit code:
import struct

binsh = struct.pack("I", 0xb6f85588)
string = "AAAABBBBCCCCDDDDEEEE"
gadget = struct.pack("I", 0x00010550)
system = struct.pack("I", 0x00010538)

print(string + gadget + binsh + system)
Honestly, not too far off from our pseudo-code :)
Let’s see it in action:
Notice that it doesn’t work the first time. This is because /bin/sh terminates when the pipe closes, since no input is coming in from STDIN. To get around this, we use cat(1), which lets us relay input through it to the shell. Nifty trick.
Conclusion
This was a fairly basic challenge, with everything laid out conveniently. Real-world ROP chaining is a little more involved, with many more gadgets chained together to achieve code execution.
Hopefully, I’ll get around to writing about heap exploitation on ARM too. That’s all for now.
From: Daniel Wallin (dalwan01_at_[hidden])
Date: 2003-11-14 15:46:44
David Abrahams wrote:
> Daniel Wallin <dalwan01_at_[hidden]> writes:
>
>
>>>I had this idea some time ago, but Peter Dimov pointed out that it
>>>doesn't really prevent accidental ADL problems:
>>> namespace my {
>>> class X {};
>>> template <class T>
>>> garbage clone(T, X& x)
>>> {
>>> // not the same meaning for "clone"
>>> }
>>> }
>>> lib::some_generic_algorithm(my::X());
>>
>>I might be mistaken, but that shouldn't compile should it? This should
>>fail due to ambiguity, which isn't really an ADL problem in this case
>>since my::clone() wasn't meant to be found.
>
>
> I beg to differ. If it fails accidentally due to ambiguity it's
> definitely a problem that wouldn't have happened without ADL. In some
> cases there may not be a way to prevent the wrong clone from causing
> ambiguity without modifying lib, which may be out of your control.
Ok, you are right. The library could allow some control over what is
passed to clone() though, to control ADL.
struct myXref
{
myXref(const lib_i_dont_control::X& x)
: ref(x)
{}
const lib_i_dont_control::X& ref;
};
lib_i_dont_control::X* clone(clone_factory_tag, myXref x)
{
return x.ref.clone();
}
lib::some_generic_algorithm(my::X(), type<myXref>());
Maybe not so nice..
-- Daniel Wallin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Contents
.5 Decision-Making Constructs and Loop Constructs
.6 Events and Delegates
.7 Stream class to implement File handling
.8 Collections
.9 Threading
.10 Serialization
.11 Exception handling
.12 Creating a Website
.13 Introduction to .NET Framework
.13 Hierarchy of .config files
.14 Configuring Server Controls
.15 Creating Master pages to establish a common layout for a web application
.16 Constructing Web Application using Web Parts
.17 Site Navigation
.18 ASP.NET State Management & Security
.19 Windows Forms & Controls
.20 Crystal Report
.21 Components
.22 Asynchronous Programming
.23 Web Services
.24 WCF (Windows Communication Foundation)
.25 Remote Client and Server Application
.26 Message Queuing
.27 Message Queuing in Client and Server applications
.28 Web Services Enhancement (WSE)
.29 LINQ (Language-Integrated Query)
.30 XML
.31 XML Schema
.32 XSLT (Extensible Style sheet Language Transformations)
.33 Books
.34 XPath Patterns
.35 XML Reader
.36 XML Writer
.37 DOM API for XML
.38 ADO.NET
.39 Connected and Disconnected Architecture
.40 Store and Retrieve Binary Data using ADO.NET
.41 Performing Bulk Copy Operations in ADO.NET
.42 Manage Distributed Transactions
.43 Caching
Chapter I: Object-oriented Programming using C#
A class is an expanded concept of a data structure: instead of holding only data, it can hold both data and functions. An object is an instantiation of a class. In terms of variables, the class is the type and the object is the variable. The class is the general thing, and the object is a specialization of it. For example, Human can be a class, and Linto, Raj, etc. (names of individual persons) are objects of that class.

Encapsulation
Encapsulation is one of the fundamental principles of object-oriented programming. It is the process of hiding the internal details of an object from the outside world, exposing only the data and methods that are required.
Example: a chair has state (length, breadth, height) and behavior (you can sit on it, move it, its legs can break). State and behavior are bundled inside the object; that is encapsulation.

Abstraction
Abstraction is the representation of only the essential features of an object, hiding the inessential ones. Through abstraction, irrelevant detail is hidden in order to reduce complexity and increase efficiency. It simplifies complex reality by modeling classes appropriate to the problem.
Example: people own savings accounts, checking accounts, credit accounts and investment accounts, but not generic bank accounts. Here, a bank account can be an abstract class, and all the specialized accounts inherit from it.

Difference between Abstraction and Encapsulation
1. Abstraction solves the problem at the design level; encapsulation solves it at the implementation level.
2. Abstraction is used for hiding unwanted detail and exposing only relevant data; encapsulation means packing code and data into a single unit to protect the data from the outside world.
3. Abstraction is a technique that helps identify which information should be visible and which should be hidden; encapsulation is the technique for packaging the information so as to hide what should be hidden and make visible what is intended to be visible.
Polymorphism
Polymorphism means "many forms": the same interface can be backed by different behaviors. Example: a car's speedometer is one display, yet it shows many different speeds through that same interface; in the same way, one method name can behave differently in different classes.
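The idea can be sketched in C# with virtual methods. The class names here are invented for illustration, not taken from the course material:

```csharp
using System;

class Animal
{
    // virtual method: derived classes may override it
    public virtual string Speak() { return "..."; }
}

class Dog : Animal
{
    public override string Speak() { return "Woof"; }
}

class Cat : Animal
{
    public override string Speak() { return "Meow"; }
}

class Program
{
    static void Main()
    {
        // The same Animal reference behaves differently per object.
        Animal[] animals = { new Dog(), new Cat() };
        foreach (Animal a in animals)
        {
            Console.WriteLine(a.Speak()); // Woof, then Meow
        }
    }
}
```

The call a.Speak() is resolved at run time based on the actual object, which is exactly the "one interface, many behaviors" idea.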
Inheritance

        Human
          |
     _____|_____
    |           |
   Men        Women

Here Human is the general class, and Men and Women are subclasses of it: the Human class contains all the general features, while Men and Women contain features specific to their own class.
.3 C# Data Types
sbyte: Holds 8-bit signed integers. The s in sbyte stands for signed, meaning the variable's value can be either positive or negative.
byte: Holds 8-bit unsigned integers. Unlike sbyte variables, byte variables are not signed and can only hold positive numbers.
uint: Holds 32-bit unsigned integers. The u in uint stands for unsigned. The smallest possible value of a uint variable is 0.
long: Holds 64-bit signed integers. The smallest possible value of a long variable is -9,223,372,036,854,775,808.
ulong: Holds 64-bit unsigned integers. The u in ulong stands for unsigned. The smallest possible value of a ulong variable is 0.
char: Holds a 16-bit Unicode character. The smallest possible value of a char variable is the Unicode character whose value is 0.
float: Holds a 32-bit signed floating-point value. The smallest positive value of a float is approximately 1.5 times 10 to the -45th power.
double: Holds a 64-bit signed floating-point value. The smallest positive value of a double is approximately 5 times 10 to the -324th power.
decimal: Holds a 128-bit signed floating-point value. Variables of type decimal are good for financial calculations.
bool: Holds one of two possible values, true or false.

Secondary Data Types
Structure
struct describes a value type in the C# language. Structs can improve speed and memory usage, but they cannot be used the same way as classes. Collections of structs are allocated together.

struct Simple
{
    public int Position;
    public bool Exists;
    public double LastValue;
};

Difference between structures and classes
- You cannot have instance field initializers in structs, but classes can have them.
- Classes can have explicit parameterless constructors; structs cannot.

Enumerations
An enumeration (enum) is a value type that defines a set of named constants.

Arrays
In C#, an array index starts at zero: the first item of an array is at position 0, and the position of the last item is the total number of items minus 1. So if an array has 10 items, the 10th item is at position 9. A fixed-size array is declared like this:

int[] intArray;
intArray = new int[5];
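Since the notes mention enumerations without showing one, here is a minimal hedged sketch of an enum alongside fixed-size array indexing (the names are invented for illustration):

```csharp
using System;

class Program
{
    // An enum defines a set of named integral constants;
    // by default the underlying values start at 0.
    enum Day { Sunday, Monday, Tuesday }

    static void Main()
    {
        Day today = Day.Monday;
        Console.WriteLine(today);           // prints Monday
        Console.WriteLine((int)today);      // prints 1 (underlying value)

        int[] intArray = new int[5];        // valid indices are 0..4
        intArray[0] = 10;
        intArray[4] = 50;                   // last valid index is Length - 1
        Console.WriteLine(intArray.Length); // prints 5
    }
}
```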
.4 Operators
Arithmetic operators
They are used to perform the general arithmetic operations.
+   Addition
-   Subtraction
*   Multiplication
/   Division
%   Gets the remainder left after integer division

Increment and decrement operators
++x   Pre-increment
--y   Pre-decrement
x++   Post-increment
y--   Post-decrement
~     Bitwise complement (works by flipping bits)

Assignment operators
The assignment operators assign a new value to a variable, a property, an event, or an indexer element.
=    Basic assignment operator
*=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=   Compound assignment operators

Relational operators
<    less than operator
<=   less than or equal to operator
>    greater than operator
>=   greater than or equal to operator
==   equal to operator
!=   not equal to operator

Logical operators
The logical operators (&&, ||, !) combine boolean expressions, and decisions are made based on the resulting values.
Decision-Making Constructs
They decide which action to take based on user input and external conditions.

The if block
The if block is the powerhouse of conditional logic, able to evaluate any combination of conditions and deal with multiple, different pieces of data.

int x1 = 50;
int x2 = 100;
if (x1 > x2)
{
    Console.WriteLine("X1 is greater");
}
else
{
    Console.WriteLine("X2 is greater");
}

OUTPUT
X2 is greater

The switch block
The switch statement selects for execution a statement list having an associated switch label that corresponds to the value of the switch expression.

int caseSwitch = 1;
switch (caseSwitch)
{
    case 1:
        Console.WriteLine("Case 1");
        break;
    case 2:
        Console.WriteLine("Case 2");
        break;
    default:
        Console.WriteLine("Default case");
        break;
}
OUTPUT
Case 1

Loop Constructs
The while loop
The while loop is probably the simplest one, so we will start with that. It executes a block of code as long as the condition you give it is true. Let's see this with an example.

static void Main(string[] args)
{
    int number = 0;
    while (number < 5)
    {
        Console.WriteLine(number);
        number = number + 1;
    }
    Console.ReadLine();
}

OUTPUT
0
1
2
3
4

The do loop
The do loop evaluates the condition after the loop has executed, which makes sure the code block is always executed at least once.

int number = 0;
do
{
    Console.WriteLine(number);
    number = number + 1;
} while (number < 4);

OUTPUT
0
1
2
3

The for loop
The for loop is preferred when you know how many iterations you want, either because you know the exact number of iterations or because you have a variable containing the amount.

int number = 5;
for (int i = 0; i < number; i++)
    Console.WriteLine(i);
Console.ReadLine();

OUTPUT
0
1
2
3
4

The foreach loop
The foreach loop operates on collections of items, for instance arrays or other built-in list types. In our example we will use one of the simple lists, called an ArrayList.

using System.Collections;

ArrayList list = new ArrayList();
list.Add("John Doe");
list.Add("Jane Doe");
list.Add("Someone Else");
foreach (string name in list)
    Console.WriteLine(name);
Console.ReadLine();

OUTPUT
John Doe
Jane Doe
Someone Else
A sample C# program
Select File -> New -> Project. From the dialog box select Visual C#, then click on Console Application. With the help of the Browse button choose the required location and click OK. Type the code below in the code file; the source file will have the extension .cs.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    public class first
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Welcome");
            Console.ReadLine();
        }
    }
}

Output: Welcome

Console.ReadLine
To read user input from the console in the simplest way possible, use the Console.ReadLine method in the System namespace of the .NET Framework, which reads a line of characters as a string.

Console.WriteLine
To write text or data to the screen from a console program, use Console.WriteLine.
Interfaces
Interfaces in C# are provided as a replacement for multiple inheritance. Because C# does not support multiple inheritance of classes, some other mechanism was needed so that a class can inherit behavior from more than one source.

using System;

namespace ConsoleApplication1
{
    public class Mammal
    {
        protected string Characteristis;
        public string characteristics
        {
            get { return this.Characteristis; }
            set { this.Characteristis = value; }
        }
    }

    interface IIntelligence
    {
        /// Interface method declaration
        bool intelligent_behavior();
    }

    public class Whale : Mammal
    {
        public Whale() { characteristics = "Whale are mammal"; }
    }

    public class Human : Mammal, IIntelligence
    {
        public Human() { characteristics = "Human are mammals and have intelligence"; }
        public bool intelligent_behavior() { return true; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Whale whale = new Whale();
            Console.WriteLine(whale.characteristics);

            Human human = new Human();
            if (human.intelligent_behavior())
                Console.WriteLine(human.characteristics);
        }
    }
}

Output:
Whale are mammal
Human are mammals and have intelligence
Difference between abstract classes and interfaces
An abstract class is a class that cannot be instantiated but that can contain code. An interface only contains method definitions and no implementation; with an interface, you need to implement all the methods it defines.

Delegates
A delegate is a class that can hold a reference to a method. Unlike other classes, a delegate class has a signature, and it can hold references only to methods that match that signature. A delegate is thus equivalent to a type-safe function pointer or a callback. A delegate declaration is sufficient to define a delegate class: the declaration supplies the signature of the delegate, and the common language runtime provides the implementation.

Example:

// Declaration
public delegate void SimpleDelegate();

class Program
{
    public static void MyFunc()
    {
        Console.WriteLine("I was called by delegate ...");
    }

    static void Main(string[] args)
    {
        // Instantiation
        SimpleDelegate simpleDelegate = new SimpleDelegate(MyFunc);
        // Invocation
        simpleDelegate();
    }
}

OUTPUT:
I was called by delegate ...

Events:

namespace ConsoleApplication2
{
    public delegate void MyDelegate(); // delegate declaration

    public interface I
    {
        event MyDelegate MyEvent;
        void FireAway();
    }

    public class MyClass : I
    {
        public event MyDelegate MyEvent;
        public void FireAway()
        {
            if (MyEvent != null)
                MyEvent();
        }
    }

    public class MainClass
    {
        static private void f()
        {
            Console.WriteLine("This is called when the event fires.");
        }

        static public void Main()
        {
            I i = new MyClass();
            i.MyEvent += new MyDelegate(f);
            i.FireAway();
        }
    }
}

OUTPUT:
This is called when the event fires.
StreamWriter class
The StreamWriter class is inherited from the abstract class TextWriter. The TextWriter class represents a writer that can write a series of characters. The following table describes some of the methods used by the StreamWriter class.

Close       Closes the current StreamWriter object and the underlying stream
Flush       Clears all buffers for the current writer and causes any buffered data to be written to the underlying stream
Write       Writes to the stream
WriteLine   Writes the data specified by the overloaded parameters, followed by an end of line

Enter the text you want to write to the file and press Enter; the text will be written to the document.

StreamReader class
The StreamReader class is inherited from the abstract class TextReader. The TextReader class represents a reader that can read a series of characters. The following table describes some methods of the StreamReader class.

Close       Closes the object of the StreamReader class and the underlying stream, and releases any system resources associated with the reader
Peek        Returns the next available character but does not consume it
Read        Reads the next character or the next set of characters from the stream
ReadLine    Reads a line of characters from the current stream and returns the data as a string
Seek        Allows the read/write position to be moved to any position within the file

        Console.ReadLine();
        sr.Close();
        fs.Close();
    }

    static void Main(string[] args)
    {
        FileRead wr = new FileRead();
        wr.ReadData();
    }
}
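Because the file-handling listing above is truncated, here is a self-contained hedged sketch of writing and then reading a text file with StreamWriter and StreamReader (the file name is invented):

```csharp
using System;
using System.IO;

class FileDemo
{
    static void Main()
    {
        // Write a line to a file (path is illustrative).
        using (StreamWriter sw = new StreamWriter("test.txt"))
        {
            sw.WriteLine("Hello, file!");
        }

        // Read the file back line by line.
        using (StreamReader sr = new StreamReader("test.txt"))
        {
            string line;
            while ((line = sr.ReadLine()) != null)
            {
                Console.WriteLine(line);
            }
        }
    }
}
```

The using blocks dispose the writer and reader, which closes the underlying streams automatically.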
.8 Collections
The .NET Framework has powerful support for collections. Collections are enumerable data structures that can be accessed using indexes or keys.

The System.Collections namespace
The System.Collections namespace provides many classes, methods and properties to interact with the varying data structures it supports. Let us see a few examples of collections.

ArrayList
The ArrayList class is a dynamic array of heterogeneous objects. Note that in an array we can store only objects of the same type; in an ArrayList, however, we can have objects of different types, which are all stored as the object type. We can have an ArrayList object that stores an integer, a float, a string, etc.

using System;
using System.Collections;

namespace Array_list
{
    class Program
    {
        static void Main(string[] args)
        {
            int i = 100;
            ArrayList arrayList = new ArrayList();
            arrayList.Add("Joydip");
            arrayList.Add(i);
            arrayList.Add(20.5);
            foreach (object o in arrayList)
                Console.Write(o + " ");
            Console.ReadLine();
        }
    }
}

Output
Joydip 100 20.5

StringCollection
The StringCollection class implements the IList interface and is like an ArrayList of strings. The following code example shows how we can work with the StringCollection class.

using System;
using System.Collections.Specialized;

namespace Array_list
{
    class Program
    {
        static void Main(string[] args)
        {
            StringCollection stringList = new StringCollection();
            stringList.Add("Manashi");
            stringList.Add("Joydip");
            stringList.Add("Jini");
            foreach (string str in stringList)
            {
                Console.WriteLine(str);
            }
            Console.ReadLine();
        }
    }
}

Output:
Manashi
Joydip
Jini
.9 Threading
Overview: features and benefits of threads, threading concepts in C#, and how they work.

Working with threads
In the .NET Framework, the System.Threading namespace provides classes and interfaces that enable multi-threaded programming. This namespace provides a ThreadPool class for managing a group of threads, a Timer class that enables calling delegates after a certain amount of time, and a Mutex class for synchronizing mutually exclusive threads, along with classes for scheduling threads. Information on this namespace is available in the help documentation in the Framework SDK.

using System;
using System.Threading;

namespace Threading
{
    class Program
    {
        public static void StaticMethod()
        {
            Console.WriteLine("You are in StaticMethod. Running on " + Thread.CurrentThread.Name);
            Console.WriteLine(Thread.CurrentThread.Name + " Going to Sleep Zzzzzzzz");
            Thread.Sleep(1000);
        }

        static void Main(string[] args)
        {
            Thread threadB = new Thread(new ThreadStart(StaticMethod));
            threadB.Name = "Thread B";
            threadB.Start();
            threadB.Join();
            Console.ReadLine();
        }
    }
}

Output:
You are in StaticMethod. Running on Thread B
Thread B Going to Sleep Zzzzzzzz
.10 Serialization
Many applications need to store or transfer objects. To make these tasks as simple as possible, the .NET Framework includes serialization techniques. Serialization is a process where an object is converted into a form that can be stored or transported to a different place, for example to save the object's data to a hard drive. Even more useful is its application when we want to send an object over some kind of network; without serialization, remoting would be impossible.
.11 Exception handling
Exception handling is a built-in mechanism in the .NET Framework to detect and handle run-time errors, and the framework contains many standard exceptions. Exceptions are anomalies that occur during the execution of a program; they can be caused by user, logic or system errors. If the programmer does not provide a mechanism to handle these anomalies, the .NET run-time environment provides a default mechanism, which terminates the program execution.

C# provides three keywords for exception handling: try, catch and finally. The try block encloses the statements that might throw an exception, catch handles an exception if one occurs, and finally can be used for any clean-up work. If an exception occurs inside the try block, control transfers to the appropriate catch block and then to the finally block. In C#, both catch and finally blocks are optional: the try block can exist with one or more catch blocks, with a finally block, or with both. If no exception occurs inside the try block, control transfers directly to the finally block, so the statements inside the finally block are always executed. Note that it is an error to transfer control out of a finally block by using break, continue, return or goto.

A try block can throw multiple exceptions, which can be handled using multiple catch blocks. The throw statement throws an exception: a throw statement with an expression throws the exception produced by evaluating the expression, while a throw statement with no expression is used inside a catch block to re-throw the exception that is currently being handled.

using System;

namespace exception_handling
{
    class Program
    {
        static void Main(string[] args)
        {
            int x = 0;
            int div = 0;
            try
            {
                div = 100 / x;
                Console.WriteLine("Not executed line");
            }
            catch (DivideByZeroException de)
            {
                Console.WriteLine("Exception occurred");
            }
            finally
            {
                Console.WriteLine("Finally Block");
            }
            Console.WriteLine("Result is {0}", div);
            Console.ReadLine();
        }
    }
}

Output:
Exception occurred
Finally Block
Result is 0
ASP.NET is a part of the Microsoft .NET Framework and a powerful tool for creating dynamic and interactive web pages. ASP stands for Active Server Pages; ASP.NET is a Microsoft server-side web technology. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language.

ASP.NET takes an object-oriented programming approach to web page execution. Every element in an ASP.NET page is treated as an object and run on the server. An ASP.NET page gets compiled into an intermediate language by a .NET Common Language Runtime compiler. ASP.NET enables you to access information from data sources such as back-end databases and text files that are stored on a web server or on a computer accessible to a web server. It also enables you to separate the HTML design from the data retrieval mechanism; as a result, changing the HTML design does not affect the programs that retrieve data from the database.

Some important features
Better language support: ASP.NET uses ADO.NET and supports full Visual Basic (not VBScript), C#, C++ and JScript.
Event-driven programming: all ASP.NET objects on a web page can expose events that can be processed by ASP.NET code. Load, Click and Change events handled by code make coding much simpler and much better organized.
XML-based components: ASP.NET components are heavily based on XML, like the new AdRotator, which uses XML to store advertisement information and configuration.
User authentication, with accounts and roles: ASP.NET supports form-based user authentication, cookie management, and automatic redirecting of unauthorized logins. ASP.NET allows user accounts and roles, to give each user (with a given role) access to different server code and executables.
Higher scalability: much has been done in ASP.NET to provide greater scalability, including server-to-server communication; the result is greatly increased performance.
Easier configuration and deployment: configuration of ASP.NET is done with plain text files, which can be uploaded or changed while the application is running. There is no need to restart the server, and no more metabase or registry puzzle.
Note that ASP.NET is not fully ASP compatible.

Software requirements
Visual Studio 2008

The web server's and web browser's roles
The web server is responsible for accepting requests for a resource and sending the appropriate response. The web browser is responsible for displaying data to the user, collecting data from the user, and sending data to the web server. HTTP is a text-based communication protocol that is used to communicate between web browsers and web servers, using port 80. Each HTTP command contains a method that indicates the desired action; common methods are GET and POST. Sending data to the web server from the browser is commonly referred to as a post-back in ASP.NET programming.
.13 Introduction to .NET Framework
Server technologies and client technologies:
- ASP.NET (Active Server Pages)
- Windows Forms (Windows desktop solutions)
- Compact Framework (PDA / mobile solutions)

Development environments:
- Visual Studio .NET (VS .NET)
- Visual Web Developer

The .NET Framework is a software framework for Microsoft Windows operating systems. It includes a large library, and it supports several programming languages, which allows language interoperability (each language can use code written in other languages). The .NET library is available to all the programming languages that .NET supports.

CTS
The Common Type System (CTS) describes how types are declared, used and managed. CTS facilitates cross-language integration, type safety, and high-performance code execution.

CLS
The Common Language Specification (CLS) is a specification that defines the rules to support language integration. This is done in such a way that programs written in any .NET-compliant language can interoperate with one another, taking full advantage of inheritance, polymorphism, exceptions, and other features.
Built-in browser
The IDE comes with a built-in browser that helps you browse the Internet without launching another application. You can look for additional resources, online help files, source code and much more with this built-in browser feature.

Web pages
ASP.NET web pages, known officially as "web forms", are the main building block for application development. Web forms are contained in files with an .aspx extension; these files typically contain static (X)HTML markup, as well as markup defining server-side web controls and user controls, where the developers place all the required static and dynamic content for the web page. An ASP.NET web page consists of two parts:
- visual elements, which include markup, server controls, and static text;
- programming logic for the page, which includes event handlers and other code.

ASP.NET provides two models for managing the visual elements and code: the single-file page model and the code-behind page model.

Single-file page model
In the single-file page model, the page's markup and its programming code are in the same physical .aspx file.

Code-behind page model
Microsoft recommends dealing with dynamic program code by using the code-behind model, which places this code in a separate file or in a specially designated script tag. The code-behind page model allows you to keep the markup in one file (the .aspx file) and the programming code in another file. The name of the code file varies according to the programming language you are using: code-behind files typically have names like MyPage.aspx.cs or MyPage.aspx.vb, while the page file is MyPage.aspx (the same file name as the page file, but with a final extension denoting the language).

Directives
A directive is a special instruction on how ASP.NET should process the page. Each directive can contain one or more attributes (paired with values) that are specific to that directive. The directives section is one of the most important parts of an ASP.NET page: directives control how a page is compiled, specify settings when navigating between pages, aid in debugging (error-fixing), and allow you to import classes to use within your page's code. Directives start with the sequence <%@, followed by the directive name plus any attributes and their corresponding values, and end with %>. The most common directive is <%@ Page %>, which can specify many things, such as which programming language is used for the server-side code. The Page directive is used at the top of the page, like:

<%@ Page Language="VB" %>
<%@ Page Language="C#" %>

The Page directive, in this case, specifies the language to be used for the application logic by setting the Language attribute appropriately. The value provided for this attribute, in quotes, specifies that we are using either VB.NET or C#.
Namespaces
Namespaces are a way to organize classes and other types into one hierarchical structure. System is the basic namespace used by all .NET code. If we explore the System namespace a little, we can see it has many namespaces nested under it, for example System.IO, System.Net, System.Collections, System.Threading, etc. Let us see the content of a few namespaces:
System: Contains classes for implementing base data types.
System.Collections: Contains classes for working with standard collection types, such as hash tables and array lists.
System.ComponentModel: Contains classes for implementing both the design-time and run-time behavior of components and controls.
System.Data: Contains classes for implementing the ADO.NET architecture.
System.Web: Contains classes and interfaces that enable browser/server communication.
System.Web.SessionState: Contains classes for implementing session state.
System.Web.UI: Contains classes that are used in building the user interfaces of ASP.NET pages.
System.Web.UI.WebControls: Contains classes for providing web controls.
System.Web.UI.HtmlControls: Contains classes for providing HTML controls.
Life Cycle Events of an ASP.NET Page
PreInit: This is the first real event you might handle for a page. You typically use this event only if you need to dynamically set values such as the master page or theme.
Init: This event fires after each control has been initialized.
InitComplete: Raised once all initialization of the page and its controls has been completed.
PreLoad: This event fires before view state has been loaded for the page and its controls and before PostBack processing.
Load: The page is stable at this time.
Control (PostBack) event(s): ASP.NET now calls any events on the page or its controls that caused the PostBack to occur.
LoadComplete: At this point all controls are loaded.
PreRender: Allows final changes to the page or its controls.
SaveStateComplete: Prior to this event the view state for the page and its controls is saved.
Render: At this point ASP.NET calls this method on each of the page's controls to get its output.
Unload: This event is used for cleanup code.
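A rough code-behind sketch of handling a few of the lifecycle events above (the class name and the commented theme name are hypothetical, not from the text):

```csharp
using System;

public partial class _Default : System.Web.UI.Page
{
    // PreInit: the only safe place to assign a theme or master page dynamically
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Page.Theme = "ControlsTheme";  // hypothetical theme name
    }

    // Load: runs on every request, before the control (PostBack) event handlers
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // one-time initialization on the first GET request
        }
    }

    // Unload: the response has already been rendered; use for cleanup only
    protected void Page_Unload(object sender, EventArgs e)
    {
    }
}
```

With AutoEventWireup="true" in the Page directive, ASP.NET binds these handlers by name automatically.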
Go to File Menu and Click on New Web Site ... as displayed in the picture below.
A dialog box will appear asking for Template, Location, and Language. Select/write the location in the Location textbox, select ASP.NET Web Site as the template, and select Visual C# as the language, as displayed in the picture below. Click OK.
This will create a Default.aspx file, a Default.aspx.cs file (called the code-behind file), a web.config file, and an App_Data folder (used to hold your database, such as MS Access or SQL Express files, or XML files) by default, as displayed in the picture below.
Start writing a few lines of code as displayed in the picture below. You can notice that as soon as you start writing code, Visual Web Developer gives IntelliSense (a dropdown) to complete the code of the control, as shown in the picture below.
Write a Label control and a Button control as displayed in the picture below. Notice that in the OnClick property of the Button control I have specified the name of the method that will fire when the button is clicked. I will write this method in the code-behind file.
Now press the F7 key on the keyboard and you will jump to the code-behind file named Default.aspx.cs. Alternatively, you can click the + symbol beside the Default.aspx file and double-click Default.aspx.cs. Write a few lines of code into it as displayed in the picture below. In this file the Page_Load method will be written automatically. I am writing the SubmitMe method that will fire when the Button is clicked, as stated above. Now save all files by clicking the Save All toolbar button or by going through File -> Save All.
Now hit the F5 key on your keyboard and your application will start in debug mode. Alternatively, you can click the Play button on the toolbar. This will ask you to modify the web.config file to start the application in debug mode, as in the picture below. Click the OK button. Try clicking the Submit button and you will see that the message "Hello World" appears in the browser just beside the Button, as in the picture below.
The default folders present in the newly created web site:
App_Code Folder: As its name suggests, the App_Code folder stores classes, typed data sets, etc. All the items that are stored in App_Code are automatically accessible throughout the application.
App_Data Folder: The App_Data folder is used as storage for the web application. It can store files such as .mdf, .mdb, and XML. It manages all of your application's data centrally. It is accessible from anywhere in your web application.
App_Themes Folder: If you want to give your web sites a consistent look, then you need to design themes for your web application. The App_Themes folder contains all such themes.
App_Browsers Folder: The App_Browsers folder contains browser information files (.browser files). These files are XML-based files which are used to identify the browser and browser capabilities.
App_LocalResources Folder: Local resources are specific to a single web page, and should be used for providing multilingual functionality on a web page.
App_GlobalResources Folder: The App_GlobalResources folder can be read from any page or code that is anywhere in the web site.
Bin Folder: Contains compiled assemblies for code that you want to reference in your application.
The .NET Framework relies on .config files to define configuration options. The .config files are text-based XML files. Multiple .config files can, and typically do, exist on a single system.
The web.config file is used to control the behavior of individual ASP.NET applications, while the machine-level configuration file applies to all .NET applications on the whole system.
<system.net>
  <mailSettings>
    <smtp>
      <network host="Host name" port="25" userName="Username" password="password" />
    </smtp>
  </mailSettings>
</system.net>
How to set the image path when uploading images to the web:
<appSettings>
  <add key="ItemImagePath" value="Specify the path to save images"/>
</appSettings>
Chapter III: Server Controls and Customizing a Web Application
HTML Server Controls
HTML server controls are HTML tags understood by the server. An HTML server control allows us to define actual HTML inside our ASP.NET page but work with the control on the server through an object model provided by the .NET Framework.
Creating an HTML server control in the designer:
Open an ASP.NET page inside the designer.
In the Toolbox, click the HTML tab to expose the HTML controls.
ASP.NET provides programmers a set of web server controls for creating web pages that provide more functionality and a more consistent programming model.
Adding a web server control using Design view:
Open the Toolbox and double-click the required control, OR drag a web server control from the Toolbox and drop it on the web page in either the Design or Source view.
BorderColor: The border color of the control, which can be set using standard HTML color identifiers such as "black" or "red".
BorderWidth: The width of the control's border in pixels.
BorderStyle: The border style; possible values include NotSet, None, Dotted, and Outset.
CssClass: The CSS class to assign to the control.
Style: A list of all CSS properties that are applied to the specified HTML server control.
Enabled: An attribute that disables the control when set to false.
EnableTheming: This enables themes for this control.
Font: You can make a web server control's text italic by including the Font-Italic attribute in its opening tag.
ForeColor: The foreground color of the control.
Height: Controls the height.
SkinID: The skin to apply to the control.
TabIndex: The control's position in the tab order.
ToolTip: The text that appears when the user hovers the mouse pointer over a control.
Width: The width of the control. The possible units are pixels, points, inches, etc.
The TextBox Control
The TextBox control is used to collect information from a user. The TextBox control contains a TextMode property that you can set to SingleLine, MultiLine, or Password. The SingleLine value allows the user to enter a single line of text. The MultiLine value indicates that a user is able to enter many lines of text. The Password value creates a single-line text box that masks the values entered by the user as they are entered. MaxLength specifies the maximum number of characters the user can enter into the control. ReadOnly determines whether the data in the TextBox can be edited.
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
The Button Control
The Button control displays a button on the web page that a user can click to trigger a PostBack to the web server. The Button control can be rendered as a submit button or a command button. A submit button performs a PostBack to the server. You provide an event handler for the button's Click event to control the actions performed when the user clicks the button. A Button control can also be used as a command button, which is one of a set of buttons that work together as a group, such as a toolbar. You define a button as a command button by assigning a value to its CommandName property.
CheckBox Control
The CheckBox control gives the user the ability to select between true and false. The CheckBox control's Text property specifies its caption. The CheckedChanged event is raised when the state of the CheckBox control changes.
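As a minimal sketch of the command-button pattern described above (the button IDs and the handler name are hypothetical), several buttons can share one Command handler and be told apart by their CommandName:

```csharp
using System;
using System.Web.UI.WebControls;

public partial class ToolbarPage : System.Web.UI.Page
{
    // Markup sketch (hypothetical IDs):
    // <asp:Button ID="btnSave" runat="server" Text="Save"
    //     CommandName="Save" OnCommand="Toolbar_Command" />
    // <asp:Button ID="btnDelete" runat="server" Text="Delete"
    //     CommandName="Delete" OnCommand="Toolbar_Command" />

    protected void Toolbar_Command(object sender, CommandEventArgs e)
    {
        // CommandName identifies which toolbar button raised the event
        switch (e.CommandName)
        {
            case "Save":
                // save logic here
                break;
            case "Delete":
                // delete logic here
                break;
        }
    }
}
```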
Properties:
Text: This property is used to get or set the text of the CheckBox control.
Items: This property is used to access the individual check boxes in the CheckBoxList control.
RadioButton Control
The RadioButton control gives the user the ability to select between mutually exclusive RadioButton controls in a group. To group multiple RadioButton controls together, specify the same GroupName for each RadioButton control in the group. ASP.NET ensures that the selected radio button is mutually exclusive within the group.
Text: This property is used to get or set the text of the RadioButton control.
Items: This property is used to access the individual radio buttons in the RadioButtonList control.
The Literal Control
The Literal control is similar to the Label control in that both controls are used to display static text on a web page. The Literal control does not provide substantial functionality and does not add any HTML elements to the page, whereas the Label is rendered as a <span> tag. This means that the Literal control does not have a style property, and therefore you cannot apply styles to its content. The Literal control is useful when you need to add text to the output of the page dynamically (from the server) but do not want to use a Label. The Literal control has a Mode property which is used to specify particular handling of the content of the Text property.
Mode values:
PassThrough: The text content is rendered as is.
Encode: The text content is HTML-encoded.
Transform: The text content is converted to match the markup language of the requesting browser, such as HTML or XHTML.
Example:
<html xmlns="">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Literal ID="Literal1" runat="server"></asp:Literal><br />
        <asp:Literal ID="Literal2" runat="server"></asp:Literal><br />
        <asp:Literal ID="Literal3" runat="server"></asp:Literal>
    </div>
    </form>
</body>
</html>
protected void Page_Load(object sender, EventArgs e)
{
    Literal1.Text = @"This is an<font size=4>example</font><script>alert(""Hi"");</script>";
    Literal2.Text = @"This is an<font size=2>example</font><script>alert(""Hi"");</script>";
    Literal3.Text = @"This is an<font size=7>example</font><script>alert(""Hi"");</script>";
    Literal1.Mode = LiteralMode.PassThrough;
    Literal2.Mode = LiteralMode.Encode;
    Literal3.Mode = LiteralMode.Transform;
}
Output
Tables are useful for tabular data, but they can also help with the layout of graphics and controls on a form. The concept of columns and rows is a powerful technique for web pages. HTML provides the <table> tag for defining a table, the <tr> tag for creating a row, and the <td> tag for defining a column in the row.
Adding rows and cells dynamically to a Table control:
1. Open a Visual Studio web site or create a new one. Open a web page with which to work.
2. From the Toolbox, drag a Table control onto your page.
3. Open the code-behind file for the given page and add a PreInit event to the page.
4. Inside the PreInit event, write a for loop to create five new rows in the table.
5. Inside this loop, add another for loop to create three columns for each row.
6. Inside this loop, modify the TableCell.Text property to identify the row and column.
protected void Page_PreInit(object sender, EventArgs e)
{
    Table1.BorderWidth = 1;
    for (int row = 0; row < 5; row++)
    {
        TableRow tr = new TableRow();
        for (int column = 0; column < 3; column++)
        {
            TableCell tc = new TableCell();
            tc.Text = string.Format("Row:{0} Cell:{1}", row, column);
            tc.BorderWidth = 1;
            tr.Cells.Add(tc);
        }
        Table1.Rows.Add(tr);
    }
}
Output
The Image Control
The Image control can be used to display an image on a web page. The Image control inherits directly from the WebControl class. The ImageMap and ImageButton controls inherit directly from the Image control. The Image control is represented as the <asp:Image> element in the source and has no content embedded between its opening and closing tags. The Image control's primary property, ImageUrl, indicates the path to the image that is downloaded by the browser and displayed on the page.
The AlternateText property of the Image control is used to display a text message in the user's browser when the image is not available. The DescriptionUrl property is used to provide further explanation of the content and meaning of the image when using non-visual page renderers. Setting the GenerateEmptyAlternateText property to true will add the attribute alt="" to the <img> element that the Image control generates.
The ImageButton Control
The Image control does not have a Click event. To make use of a Click event when it is needed, we can use the ImageButton control instead.
Working with the Calendar Control
The Calendar control allows you to display a calendar on a web page. The calendar can be used when asking a user to select a given date or series of dates. Users can navigate between years, months, and days.
Properties:
Caption: The text that is rendered in the calendar.
DayNameFormat: The format for the names of the days of the week.
SelectedDate: The date selected by the user.
SelectionMode: A value that indicates how many dates can be selected. It can be Day, DayWeek, or DayWeekMonth.
TodaysDate: Displays today's date.
The FileUpload Control
The FileUpload control is used to allow a user to select and upload a single file to the server. The control displays as a text box and a Browse button.
Properties:
FileBytes: The file is exposed as a byte array.
FileContent: The file is exposed as a stream.
PostedFile: The file is exposed as an object of type HttpPostedFile. This object has properties such as content type and content length.
The Panel Control
The Panel control is used as a control container. It can be useful when you need to group controls and work with them as a single unit. The BackImageUrl property can be used to display a background image in the Panel control. The Wrap property specifies whether items in the panel automatically continue on the next line when a line is longer than the width of the Panel control. The DefaultButton property can be set to the ID of any control on your form that implements the IButtonControl interface.
The MultiView and View Controls
Like the Panel control, the MultiView and View controls are also container controls; that is, they are used to group other controls. A MultiView exists to contain other View controls. A View control must be contained inside a MultiView.
You can use the ActiveViewIndex property or the SetActiveView method to change the view programmatically. If ActiveViewIndex is set to -1, no View controls are displayed.
Consider a user registration web page where you need to walk a user through the process of registering with your site. You could use a single MultiView control and three View controls to manage this process. To manage the page in this example, the buttons on the page are set up as command buttons. When a user clicks a button, the CommandName property of CommandEventArgs is checked to determine the button pressed. Based on this information, the MultiView shows another View control. The following is an example of the code-behind page:
// The enclosing handler and the first condition were lost in the source;
// the names below are reconstructed assumptions
protected void NavButton_Command(object sender, CommandEventArgs e)
{
    if (MultiView1.ActiveViewIndex == 1)
    {
        MultiView1.SetActiveView(View3);
        if (String.IsNullOrEmpty(fld_Name.Text))
        {
            lit_Name.Text = "You did not enter your name. ";
        }
        else
        {
            lit_Name.Text = "Hi, " + fld_Name.Text + ". ";
        }
    }
    else if (MultiView1.ActiveViewIndex == 2)
    {
        MultiView1.SetActiveView(View1);
    }
}
The Wizard Control
The Wizard control is used to display a series of WizardStep controls to the user one after the other as part of a user input process. The Wizard control builds on the MultiView and View controls presented previously.
Validation Controls
A validation server control is used to validate the data of an input control. If the data does not pass validation, it displays an error message to the user.
ControlToValidate: This property is used to specify the ID of the control to be validated.
ErrorMessage: This property is used to specify the error message displayed when the validation condition fails.
Text: This property is used to specify the error message displayed by the control.
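As a minimal sketch of the properties above in use (the control IDs and handler name are hypothetical), a RequiredFieldValidator wired to a TextBox could look like this:

```csharp
using System;

public partial class ValidationDemo : System.Web.UI.Page
{
    // Markup sketch (IDs are hypothetical):
    // <asp:TextBox ID="txtName" runat="server"></asp:TextBox>
    // <asp:RequiredFieldValidator ID="valName" runat="server"
    //     ControlToValidate="txtName"
    //     ErrorMessage="Name is required."
    //     Text="*" />
    // <asp:Button ID="btnSubmit" runat="server" Text="Submit"
    //     OnClick="btnSubmit_Click" />

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        // Always confirm validation passed on the server side as well;
        // client-side validation alone can be bypassed
        if (Page.IsValid)
        {
            // process the submitted name
        }
    }
}
```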
<?xml version="1.0" encoding="utf-8" ?>
<Advertisements>
  <Ad>
    <ImageUrl>images\koala.jpg</ImageUrl>
    <NavigateUrl></NavigateUrl>
    <AlternateText>ASP.NET Logo</AlternateText>
    <Keyword></Keyword>
    <Impressions>1</Impressions>
    <Caption>This is the caption for Ad#1</Caption>
  </Ad>
  <Ad>
    <ImageUrl>images\penguins.jpg</ImageUrl>
    <NavigateUrl></NavigateUrl>
    <AlternateText></AlternateText>
    <Keyword></Keyword>
    <Impressions>1</Impressions>
    <Caption>This is the caption for Ad#2</Caption>
  </Ad>
</Advertisements>
Create the application:
4. Click AdRotator1 (the newly added AdRotator control), and then press the F4 key to view its properties.
5. Set the AdvertisementFile property to myads.xml.
6. Save and build the project.
<html xmlns="">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:AdRotator ID="AdRotator1" runat="server" AdvertisementFile="~/myads.xml" />
    </div>
    </form>
</body>
</html>
Creating Master Pages
Master pages establish a common layout for a web application.
Advantages of master pages:
They allow you to centralize the common functionality of your pages so that you can make updates in just one place.
They make it easy to create one set of controls and code and apply the result to a set of pages.
Create a master page using the Add New Item dialog box, and then by selecting Master Page from the list.
<%@ Master Language="C#" AutoEventWireup="true" CodeFile="MasterPage.master.cs" Inherits="MasterPage" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head id="Head1" runat="server">
    <title>Master Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <table bgcolor="#cccc99" id="header" style="WIDTH: 100%; HEIGHT: 80px" cellspacing="1" cellpadding="1" border="1">
        <tr>
            <td width="100%" style="TEXT-ALIGN: center">
                This is the header from the Master Page
            </td>
        </tr>
    </table>
    <br />
    <table bgcolor="#99cc99" id="leftNav" style="WIDTH: 108px; HEIGHT: 100%" cellspacing="1" cellpadding="1" border="1">
        <tr>
            <td style="WIDTH: 100px">
                Left Navigation
            </td>
        </tr>
    </table>
    <table bgcolor="#ccff99" id="mainBody" style="LEFT: 120px; VERTICAL-ALIGN: top; WIDTH: 848px; POSITION: absolute; TOP: 94px; HEIGHT: 100%" border="1">
        <tr>
            <td width="100%" style="VERTICAL-ALIGN: top">
                <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
            </td>
        </tr>
    </table>
    </form>
</body>
</html>
In the content page, you create the content by adding Content controls and mapping them to ContentPlaceHolder controls on the master page.
Create a content page by selecting Web Form from the Add New Item dialog box and selecting the "Select master page" check box at the bottom of the dialog, as below.
Clicking Add in the above dialog box will result in the Master Picker dialog box
being displayed. As we can see from the below screenshot, the master selection dialog box displays all the master pages available in the Web application.
Themes
In ASP.NET, a theme is a collection of styles, property settings, and graphics that define the appearance of pages and controls on your web site. Themes save you time and improve the consistency of a site by applying a common set of control properties, styles, and graphics across all pages in a web site. Themes can be centralized, allowing you to quickly change the appearance of all the controls on your site from a single file. ASP.NET themes consist of skin files, CSS, images, and other resources.
Right-click on the project in the Solution Explorer, select Add ASP.NET Folder, and then
choose Theme from the submenu. Theme1 will be added in the Solution Explorer; rename this folder as ControlsTheme.
Right-click on the ControlsTheme folder we just created and select Add New Item; we will get this:
6. Again right-click the ControlsTheme folder and select Add New Item.
7. Select the Skin File option.
8. Name the file Button.skin and enter
<asp:Button
<br /><br />
<asp:Label ID="Label1" runat="server">This Label inherits the style specified in the Label.skin file</asp:Label>
</form>
</body>
</html>
External Style Sheet: For an external style sheet we create a separate CSS file. This style sheet can be used by all the web pages. Click on Website -> Add New Item -> Style Sheet -> Add.
Now type the below-mentioned code in the style sheet file:
body
{
    color: Blue;
    background-color: Lime;
}
In the source file, inside the head tag, type the below-mentioned code:
<head runat="server">
<link rel="Stylesheet" href="StyleSheet.css" />
<title></title>
</head>
Internal Style Sheet: With an internal style sheet, the styles apply only to the current page; other pages cannot share them. Type the below code in the source file:
<head runat="server">
<style type="text/css">
body
{
    color: Blue;
    background-color: Lime;
Web Parts
ASP.NET Web Parts controls are an integrated set of controls for creating web sites that enable end users to modify the content, appearance, and behavior of web pages directly in a browser. These controls and classes can be found inside the namespace System.Web.UI.WebControls.WebParts.
WebPartManager: The WebPartManager control is required on every page that includes web parts. It does not have a visual representation; rather, it manages all the Web Part controls and their events on the given page.
WebPart: The WebPart class represents the base class for all web parts that you develop. It provides UI, personalization, and connection features.
CatalogPart: The CatalogPart provides the UI for managing a group of web parts that can be added to a web part page. This group is typically site-wide.
PageCatalogPart: The PageCatalogPart is similar to the CatalogPart. However, it only groups those web parts that are part of the given page. In this way, if a user closes certain web parts on the page, he or she can use the PageCatalogPart to re-add them to the page.
EditorPart: The EditorPart controls allow users to define customizations for the given web part, such as modifying property settings.
DeclarativeCatalogPart: DeclarativeCatalogPart allows you to declare web parts that should be available to add to a page or the entire site.
WebPartZone
The WebPartZone control is used to define an area on your page in which web parts can be hosted.
EditorZone: The EditorZone control provides an area on the page where EditorPart controls can exist.
CatalogZone: The CatalogZone control defines an area in which a CatalogPart control can exist on the page.
To place web parts inside a zone, you have to add a ZoneTemplate control inside the WebPartZone. The ZoneTemplate control lets you add other ASP.NET controls to the zone and have them turned into actual web parts. The following markup shows an example:
Namespace: using System.Web.UI.WebControls.WebParts
Source file:
<body>
<form id="form1" runat="server">
    <asp:WebPartManager ID="WebPartManager1" runat="server">
    </asp:WebPartManager>
    <div>
        <table>
            <tr>
                <td>
                    <asp:WebPartZone ID="WebPartZone1" runat="server">
                        <ZoneTemplate>
                            <asp:Label ID="Label1" runat="server">
                                <a href="">Google site</a>
                                <br />
                            </asp:Label>
                        </ZoneTemplate>
                    </asp:WebPartZone>
                </td>
                <td>
                    <asp:WebPartZone ID="WebPartZone2" runat="server">
                        <ZoneTemplate>
                            <asp:Label ID="Label2" runat="server"></asp:Label>
                        </ZoneTemplate>
                    </asp:WebPartZone>
                </td>
            </tr>
        </table>
    </div>
</form>
</body>
</html>
Output
There are many ways to navigate from one page to another in ASP.NET:
Client-side navigation
Cross-page posting
Server-side transfer
Client-Side Navigation
Client-side code or markup allows a user to request a new web page. It requests a new web page in response to a client-side event, such as clicking a hyperlink or executing JavaScript as part of a button click.
HyperLink control, source:
<asp:HyperLink ID="HyperLink1" runat="server" NavigateUrl="~/NavigateTest2.aspx">Goto NavigateTest2</asp:HyperLink>
HyperLink control, rendered HTML:
<a id="HyperLink1" href="NavigateTest2.aspx">Goto NavigateTest2</a>
In this example, if this control is placed on a web page called NavigateTest1.aspx and the HyperLink control is clicked, the browser simply requests the NavigateTest2.aspx page.
HTML button element with client-side JavaScript:
<input id="Button1" type="button" value="Goto NavigateTest2" onclick="return Button1_onclick()" />
The JavaScript source for the Button1_onclick method is added into the <head> element of the page:
<script language="javascript" type="text/javascript">
function Button1_onclick() {
    document.location = "NavigateTest2.aspx";
}
</script>
Cross-Page Posting
A control and form are configured to post back to a different web page than the one that made the original request. This navigation is frequently desired in a scenario where data is collected on one web page and processed on another web page that displays the results. Here a Button control has its PostBackUrl property set to the web page to which the processing should post back. The page that receives the PostBack receives the posted data from the first page for processing.
Example:
protected void Page_Load(object sender, EventArgs e)
{
    if (Page.PreviousPage == null)
    {
        Label1Data.Text = "No previous page in post";
    }
    else
    {
        Label1Data.Text = ((TextBox)PreviousPage.FindControl("TextBox1")).Text;
    }
}
Here a web page called DataCollection.aspx contains a TextBox control called TextBox1 and a Button control that has its PostBackUrl set to ~/ProcessingPage.aspx. When ProcessingPage.aspx is posted to by the data collection page, it executes server-side code to pull the data from the data collection page and put it inside a Label control.
Server-Side Transfer
Server-side code transfers control of a request to a different web page. This method transfers the entire context of a web page over to another page. The page that receives the transfer generates the response back to the user's browser.
Example:
protected void ButtonSubmit_Click(object sender, EventArgs e)
{
    Server.Transfer("OrderProcessing.aspx");
}
Cookies
A cookie is a small piece of text stored on the user's computer. Usually, information is stored as name-value pairs. Cookies are used by websites to keep track of visitors. Every time a user visits a website, cookies are retrieved from the user's machine and help identify the user.
Example use of cookies to customize a web page:
if (Request.Cookies["UserId"] != null)
    lbMessage.Text = "Dear " + Request.Cookies["UserId"].Value + ", welcome to our website!";
else
    lbMessage.Text = "Guest, welcome to our website!";
If we want to store the client's information, use the below code:
Response.Cookies["UserId"].Value = username;
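A slightly fuller sketch of the same idea (the cookie name follows the example above; the seven-day expiry is an illustrative assumption, not from the text). Without an Expires value, the cookie lives only for the browser session:

```csharp
using System;
using System.Web;

public partial class CookieDemo : System.Web.UI.Page
{
    protected void StoreUser(string username)
    {
        // Write a persistent cookie
        HttpCookie cookie = new HttpCookie("UserId");
        cookie.Value = username;                  // username comes from your login logic
        cookie.Expires = DateTime.Now.AddDays(7); // assumed 7-day lifetime
        Response.Cookies.Add(cookie);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Read the cookie back on a later request
        HttpCookie stored = Request.Cookies["UserId"];
        if (stored != null)
        {
            string userId = stored.Value;
        }
    }
}
```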
Hidden Fields
A hidden field stores a value in the page without displaying it. Hidden field values can be intercepted (they are clearly visible) when passed over a network.
View State
View State can be used to store state information for a single user. View State is a built-in feature of web controls to persist data between page PostBacks. We can set View State on/off for each control using the EnableViewState property. By default, the EnableViewState property is set to true. View State information for all the controls on the page is submitted to the server on each PostBack. We can also disable View State for the entire page by adding EnableViewState="false" to the @Page directive. View State can be used with the following syntax in an ASP.NET web page:
// Add item to ViewState
ViewState["myviewstate"] = myValue;
// Reading items from ViewState
Response.Write(ViewState["myviewstate"]);
Example:
protected void Page_Load(object sender, EventArgs e)
{
    // First we check if the page is loaded for the first time
    if (!IsPostBack)
    {
        ViewState["PostBackCounter"] = 0;
    }
    // Now, we assign the variable value to the label
    int counter = (int)ViewState["PostBackCounter"];
    Label1.Text = counter.ToString();
}
protected void Button1_Click(object sender, EventArgs e)
{
    int oldCount = (int)ViewState["PostBackCounter"];
    int newCount = oldCount + 1;
    // First, we assign this new value to the label
    Label1.Text = newCount.ToString();
    // Secondly, we replace the old value in the view state so that the new value is read the next time
    ViewState["PostBackCounter"] = newCount;
}
Source code:
<asp:Label ID="Label1" runat="server"></asp:Label>
<asp:Button ID="Button1" runat="server" OnClick="Button1_Click" />
Query Strings
Query strings are usually used to send information from one page to another. They are passed along with the URL in clear text.
Source file:
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
<asp:Button ID="Button1" runat="server" OnClick="Button1_Click" />
Code file:
protected void Page_Load(object sender, EventArgs e)
{
    TextBox1.Text = Request.QueryString["aa"];
}
protected void Button1_Click(object sender, EventArgs e)
{
    Response.Redirect("Default2.aspx?aa=" + TextBox1.Text);
}
Session Object
The Session object is used to store state-specific information on a per-client basis. It is specific to a particular user. Session data persists for the duration of the user's session; you can store a session's data on the web server in different ways. Session state can be configured using the <sessionState> section in the application's web.config file.
Mode: This setting supports three options: InProc, SQLServer, and StateServer.
Cookieless: This setting takes a Boolean value of either true or false to indicate whether the session is a cookieless one.
Timeout: This indicates the session timeout value in minutes. This is the duration for which a user's session is active. Note that the session timeout is a sliding value; the default session timeout value is 20 minutes.
SqlConnectionString: This identifies the database connection string that names the database used for mode SQLServer.
Server: In the out-of-process mode StateServer, it names the server that is running the required Windows NT service: aspnet_state.
Port: This identifies the port number that corresponds to the server setting for mode StateServer. Note that a port is an unsigned integer that uniquely identifies a process running over a network.
We can disable session state for a page using the EnableSessionState attribute. We can turn session state off for the entire application by setting mode="Off" in the web.config file to reduce overhead for the entire application.
ASP.NET session state provides a place to store values that will persist across page requests. Values stored in Session are stored on the server and will remain in memory until they are explicitly removed or until the session expires.
protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack == false)
    {
        Session["aa"] = 0;
    }
}
protected void Button2_Click(object sender, EventArgs e)
{
    TextBox1.Text = Convert.ToString(Session["aa"]);
    Response.Redirect("Default2.aspx");
}
In the source code type the below code:
<asp:Button ID="Button2" runat="server" OnClick="Button2_Click" />
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
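The settings described above map onto a <sessionState> element in web.config like the following sketch; the mode, timeout, and connection-string values here are illustrative assumptions, not prescribed by the text (for StateServer mode, the server and port are supplied via stateConnectionString):

```xml
<configuration>
  <system.web>
    <!-- mode can be InProc, StateServer, or SQLServer -->
    <sessionState
        mode="InProc"
        cookieless="false"
        timeout="20"
        stateConnectionString="tcpip=127.0.0.1:42424"
        sqlConnectionString="data source=127.0.0.1;Trusted_Connection=yes" />
  </system.web>
</configuration>
```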
Security
There are two concepts in security for distributed applications: authentication and authorization. Authentication is the process of obtaining some sort of credentials from the users and using those credentials to verify the user's identity. Authorization is the process of allowing an authenticated user access to resources. Authentication always precedes authorization; even if your application lets anonymous users connect and use the application, it still authenticates them as being anonymous.
Authentication Providers
ASP.NET gives a choice of three different authentication providers. The Windows authentication provider lets you authenticate users based on their Windows accounts. This provider uses IIS to perform the authentication and then passes the authenticated identity to your application code.
Forms Authentication
Forms authentication presents the user with a form that collects the credentials, or a key for reacquiring the identity. Subsequent requests are issued with the resulting ticket in the request headers; they are authenticated and authorized by an ASP.NET handler using whatever validation method the application specifies. Try the steps below to implement forms authentication.
In the web.config file, type the code below:

<authentication mode="Forms">
  <forms loginUrl="/Default4.aspx">
    <credentials passwordFormat="Clear">
      <user name="Mary" password="123" />
    </credentials>
  </forms>
</authentication>

In the source file, type the code below:

<head runat="server">
<title></title>
<script language="C#" runat="server">
void Button1_Click(Object sender, EventArgs E)
{
    if (FormsAuthentication.Authenticate(TextBox1.Text, TextBox2.Text))
        // Completion reconstructed: redirect the authenticated user back
        FormsAuthentication.RedirectFromLoginPage(TextBox1.Text, false);
    else
        Response.Write("Invalid credentials");
}
</script>
<form id="form1" runat="server">
<div>
    <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
    <asp:TextBox ID="TextBox2" runat="server" TextMode="Password"></asp:TextBox>
    <asp:Button ID="Button1" runat="server" Text="Login" OnClick="Button1_Click" />
</div>
</form>

Windows-Based Authentication
Creating Users
Within your Windows XP or Windows Server 2003 server, choose Start-->Control Panel-->Administrative Tools-->Computer Management. If you are using Windows Vista, choose Start-->Control Panel-->System and Maintenance-->Administrative Tools-->Computer Management. Either one opens the Computer Management utility, which manages and controls resources on the local web server. You can accomplish many things using this utility, but the focus here is on the creation of users. Expand the System Tools node, then expand the Local Users and Groups node, and select the Users folder. You see something similar to the results shown in Fig 2.
Right-click the Users folder and select New User. The New User dialog appears, as shown in Figure 3.
Give the user a name, password, and description stating that this is a test user. In this example, the user is called Bubbles. Clear the check box that requires the user to change his password at the next login. Click the Create button. Your test user is created and presented in the Users folder of the Computer Management utility, as shown in the figure.
Now create a page to work with this user. Add the section presented in Listing 1 to your web.config file.
Listing 1: Denying all users through the web.config file

<system.web>
  <authentication mode="Windows" />
  <authorization>
    <deny users="*" />
  </authorization>
</system.web>

Passport Authentication: .NET Passport allows users to create a single sign-in name and password to access any site that has implemented the Passport single sign-in (SSI) service. By implementing the Passport SSI, you won't have to implement your own user-authentication mechanism. Users authenticate with the SSI, which passes their identities to your site securely. Although Passport authenticates users, it doesn't grant or deny access to individual sites; i.e., .NET Passport performs only authentication, not authorization.
Passport simply tells a participating site who the user is. Each site must implement its own access-control mechanisms based on the user's Passport User ID (PUID). To implement .NET Passport authentication, we need to download the Microsoft .NET Passport SDK from the Microsoft site and install it on our web server.
Chapter IV
Windows-based applications can make calls to methods exposed through XML Web services.
and navigation, text editing, information display (read-only), web page display, selection from a list, graphics display, value setting, date setting, and dialog boxes. Some important controls that we often use in Windows Forms are:
ComboBox: In the ComboBox we can add items to the list. In the Items property we add the elements of the list, and from the collection the user can choose one.
DateTimePicker: We can add a date and time to the Windows Form by means of the DateTimePicker control. By means of the Value property we can change the date and time of the control.
DataGridView: By means of the DataGridView we can bind values from the database to the control. When we click the small icon near the DataGridView control we find the Choose Data Source option; there we specify the database, data source, and connection name, and the data is then bound to the DataGridView control.
ListBox: In the ListBox control we can add items to the list; it groups similar items together. When we click the small icon on the ListBox control it displays the Edit Items option, by means of which we can add elements to the list.
ContextMenuStrip: With the ContextMenuStrip we can create a menu, and we can also list sub-elements inside the menu. It reduces the task of creating menus.
GroupBox: By means of the GroupBox control we can place other controls inside it.
Panel: By means of the Panel control we can place other controls inside it. The main difference between the Panel and GroupBox controls is that a GroupBox provides a caption and a border around a group of controls.
TreeView: By means of the TreeView control we can add items in a hierarchical order.
ListView: Displays items in one of four different views: text only, text with small icons, text with large icons, and a details view.
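The same controls can also be populated in code rather than through the designer. A minimal sketch (the control names, items, and layout values are illustrative, not from the text):

```
using System;
using System.Windows.Forms;

public class ControlsDemoForm : Form
{
    public ControlsDemoForm()
    {
        // ComboBox: add items to its list programmatically
        ComboBox comboBox1 = new ComboBox();
        comboBox1.Items.Add("Red");
        comboBox1.Items.Add("Green");
        comboBox1.Items.Add("Blue");

        // ListBox: group similar items together
        ListBox listBox1 = new ListBox();
        listBox1.Items.AddRange(new object[] { "Chennai", "Mumbai", "Delhi" });

        // DateTimePicker: set the displayed date via the Value property
        DateTimePicker dateTimePicker1 = new DateTimePicker();
        dateTimePicker1.Value = DateTime.Today;

        // Place the controls on the form
        comboBox1.Top = 10;
        listBox1.Top = 40;
        dateTimePicker1.Top = 150;
        Controls.AddRange(new Control[] { comboBox1, listBox1, dateTimePicker1 });
    }
}
```

Running such a form (via Application.Run(new ControlsDemoForm())) shows the three controls pre-filled with the sample items.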
Crystal Reports
Crystal Reports is an integral part of Visual Studio .NET and ships as a part of it. By making Crystal Reports a part of the Visual Studio .NET suite, Microsoft has added one more useful tool to the Visual Studio family. Crystal Reports allows users to graphically design data connections and report layouts. In the Database Expert, users can select and link tables from a wide variety of data sources, including Excel spreadsheets, Oracle databases, and local file-system information. Fields from these tables can be placed on the report design surface and can also be used in custom formulas, written in Crystal's own syntax, which are then placed on the design surface. Both fields and formulas have a wide array of formatting options available, which can be applied absolutely or conditionally. The data can be grouped into bands, each of which can be split further and conditionally suppressed as needed. Crystal Reports also supports subreports, graphing, and a limited amount of GIS functionality.
Advantages of Crystal Reports
Some of the major advantages of using Crystal Reports are:
Implementation Models
Crystal Reports needs database drivers to connect to the data source for accessing data. Crystal Reports in .NET supports two methods to access data from a data source:
Strongly-Typed Report
When you add a report file into the project, it becomes a "strongly-typed" report. In this case, you have the advantage of directly creating an instance of the report object, which can reduce a few lines of code, and of caching it to improve performance. The related .vb file, which is hidden, can be viewed using the editor's "Show All Files" icon in the Solution Explorer.
Un-Typed Report
Reports that are not included in the project are "un-typed" reports. In this case, you have to create an instance of the Crystal Report Engine's ReportDocument object and manually load the report into it.
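The difference between the two models can be sketched in code. The report class name and the .rpt path below are assumptions for illustration; only the ReportDocument.Load pattern itself comes from the text:

```
using CrystalDecisions.CrystalReports.Engine;

public class ReportLoader
{
    public void LoadReports()
    {
        // Strongly-typed: a report added to the project is itself a class,
        // so it can be instantiated directly (class name assumed):
        // SalesReport typedReport = new SalesReport();

        // Un-typed: load an external .rpt file into a generic ReportDocument.
        ReportDocument untypedReport = new ReportDocument();
        untypedReport.Load(@"C:\Reports\SalesReport.rpt"); // assumed path

        // Either object can then be assigned as the report source of a viewer.
    }
}
```

The strongly-typed form trades flexibility for compile-time checking; the un-typed form lets the report file change without rebuilding the project.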
[Sample report output: the generated report displays records such as Kavi, Priya, India.]
Components
Microsoft provides COM interfaces for many Windows application programming interfaces such as DirectShow, Media Foundation, the Packaging API, Windows Animation Manager, Windows Portable Devices, and Microsoft Active Directory (AD).
How can components improve application performance? The CLR is where .NET application code gets compiled. Whenever you make a request for an ASP.NET page, the page gets compiled first and is then transferred to the user. The compilation process is completed in two steps.
First, an IL (Intermediate Language) representation is generated; this is then handed over for JIT (Just-In-Time) compilation, which produces the machine code, and our pages get displayed with their dynamic content. The performance of our web application can be improved by creating pre-compiled libraries of IL, which can then be handed to the JIT directly without the extra conversion step. This method is called componentization. Components are pre-compiled sets of classes that have been built into DLL files and can then be included within our projects; such a compiled component is also known as an assembly.
Advantages of Components: Improving Application Performance
Example
1. Open the website.
2. Click on the Project menu --> Add Component --> Component Class.
3. In the code file, type the code below and debug the file.
using System;
using System.Diagnostics;
using System.ComponentModel;

namespace InstConLib
{
    /* The InstanceControl class controls the instance */
    public class InstanceControl : Component
    {
        private string stProcName = null;

        /* Constructor, which holds the application name */
        public InstanceControl(string ProcName)
        {
            stProcName = ProcName;
        }

        public bool IsAnyInstanceExist()
        {
            /* Process.GetProcessesByName() checks whether a process with the
               given name is currently running and returns an array of matches */
            Process[] processes = Process.GetProcessesByName(stProcName);
            if (processes.Length < 1)
                return false; /* false: no instance exists */
            else
                return true;  /* true: at least one instance exists */
        }
    } /* end of class */
} /* end of namespace */
4. Create another website and a console application.
5. In the Program.cs file (InstClient.cs), enter the following code:
using System;

class Program
{
    static void Main(string[] args)
    {
        // First object, which looks for the testApp.exe process.
        // Remember, it is not necessary to give the extension of the process.
        InstConLib.InstanceControl in1 = new InstConLib.InstanceControl("testApp");
        if (in1.IsAnyInstanceExist())
            Console.WriteLine("Already one instance is running");
        else
            Console.WriteLine("No instance running");

        // Second object, which looks for the Explorer.exe process.
        InstConLib.InstanceControl in2 = new InstConLib.InstanceControl("Explorer");
        if (in2.IsAnyInstanceExist())
            Console.WriteLine("Already one instance is running");
        else
            Console.WriteLine("No instance running");

        Console.ReadLine();
    }
}
Throughout the .NET Framework, many classes support the APM by providing BeginXXX and EndXXX versions of methods. For example, the FileStream class defined in the System.IO namespace has a Read method that reads data from a stream; to support the APM, it also offers BeginRead and EndRead methods. This pattern of BeginXXX and EndXXX methods allows you to execute methods asynchronously. The real-world programming strength of the APM is that a single pattern can be used to asynchronously execute both compute-bound and I/O-bound operations. This section describes the three "rendezvous" techniques that comprise the APM, starting with how to use the APM to perform an asynchronous compute-bound operation. The code given below is an example of using BeginXXX and EndXXX methods to execute code asynchronously.
To explain: the purpose of the example code is to read some bytes (100) from a file stream asynchronously using the APM. First we construct a FileStream object by calling one of its constructors that accepts a System.IO.FileOptions argument. For this argument, we pass in the FileOptions.Asynchronous flag; this tells the FileStream object that we intend to perform asynchronous read and write operations against the file. (A flag is simply a value, typically a single bit, that indicates the mode or status of an operation.)
To synchronously read bytes from a FileStream, we would call its Read method, which is prototyped as follows:
public Int32 Read(Byte[] array, Int32 offset, Int32 count)
The Read method accepts a reference to a Byte[] that will be filled with bytes from the file. The count argument indicates the number of bytes that we want to read; the bytes are placed in the array (buffer) between offset and (offset + count - 1). Because we are dealing with a buffer, it has a fixed length. The Read method returns the number of bytes actually read from the file. When we call the method, the read occurs synchronously; that is, the method does not return until the requested bytes have been read into the array.
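As a quick sketch of the synchronous call and its offset/count semantics before moving to the asynchronous version (the file name reuses the one from the asynchronous example below):

```
using System;
using System.IO;

class SyncReadDemo
{
    static void Main()
    {
        byte[] buffer = new byte[100];
        string filename = String.Concat(Environment.SystemDirectory, "\\ntdll.dll");

        using (FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
        {
            // Fill buffer[10] .. buffer[10 + 50 - 1] with up to 50 bytes
            // from the file; the call blocks until the read completes.
            int bytesRead = fs.Read(buffer, 10, 50);
            Console.WriteLine("Read {0} bytes synchronously", bytesRead);
        }
    }
}
```

The thread is stalled for the whole duration of the read; the asynchronous example that follows removes that limitation.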
using System;
using System.IO;

namespace Asynchronous
{
    class Program
    {
        static void Main(string[] args)
        {
            byte[] buffer = new byte[100];
            string filename = String.Concat(Environment.SystemDirectory, "\\ntdll.dll");
            FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read,
                FileShare.Read, 1024, FileOptions.Asynchronous);

            // Start the asynchronous read, then rendezvous with it via EndRead.
            IAsyncResult result = fs.BeginRead(buffer, 0, buffer.Length, null, null);
            int numBytes = fs.EndRead(result);
            fs.Close();

            Console.WriteLine("Read {0} Bytes:", numBytes);
            Console.WriteLine(BitConverter.ToString(buffer));
            Console.ReadLine();
        }
    }
}
Output:
Read 100 Bytes:
4D-5A-90-00-03-00-00-00-04-00-00-00-FF-FF-00-00-B8-00-00-00-00-00-00-00-40-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-D0-00-00-00-0E-1F-BA-0E-00-B4-09-CD-21-B8-01-4C-CD-21-54-68-69-73-20-70-72-6F-67-72-61-6D-20-63-61-6E-6E-6F-74-20-62-65
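The example above blocks on EndRead immediately, which is the simplest rendezvous. The callback rendezvous technique, one of the three mentioned earlier, lets completion be handled on a thread-pool thread without blocking the caller. A sketch (the file name matches the example above; the ManualResetEvent exists only so the demo does not exit early):

```
using System;
using System.IO;
using System.Threading;

class CallbackDemo
{
    static byte[] buffer = new byte[100];
    static FileStream fs;
    static ManualResetEvent done = new ManualResetEvent(false);

    static void Main()
    {
        string filename = String.Concat(Environment.SystemDirectory, "\\ntdll.dll");
        fs = new FileStream(filename, FileMode.Open, FileAccess.Read,
                            FileShare.Read, 1024, FileOptions.Asynchronous);

        // Pass an AsyncCallback; the rendezvous happens when the callback fires.
        fs.BeginRead(buffer, 0, buffer.Length, ReadCompleted, null);

        Console.WriteLine("Read started; the main thread is free for other work...");
        done.WaitOne(); // keep the process alive until the callback runs
    }

    static void ReadCompleted(IAsyncResult result)
    {
        int numBytes = fs.EndRead(result); // completes the operation
        fs.Close();
        Console.WriteLine("Callback read {0} bytes", numBytes);
        done.Set();
    }
}
```

EndRead must still be called exactly once per BeginRead; here it simply moves into the callback instead of the calling thread.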
CHAPTER V
Web Services
Web services are an amalgamation of eXtensible Markup Language (XML) and the HyperText Transfer Protocol (HTTP). Web services are browser- and operating-system-independent services, which means they can be consumed from any browser without the need to make any changes. Web services take web applications to the next level.
Benefits of Web Services
Easier to communicate between applications
Easier to reuse existing services
Easier to distribute information to more consumers
Rapid development
An ASP.NET web service is a class we write that inherits from System.Web.Services.WebService. This class provides a wrapper for our service code; in this way, we are free to write a web service in the same way we would write any other class and method. The following web service exposes two methods (Add and SayHello) to be used by applications. This is a standard template for a web service. .NET web services use the .asmx extension. Note that a method exposed as a web service has the WebMethod attribute. Save this file as filename.asmx in the IIS virtual directory (as explained in configuring IIS; for example, c:\MyWebServices). In the web service, include the Add method:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Xml.Serialization;

// Class name reconstructed; the service exposes the Add and SayHello
// methods described in the text above.
[WebService(Namespace = "http://tempuri.org/")]
public class FirstService : System.Web.Services.WebService
{
    [WebMethod]
    public int Add(int a, int b)
    {
        return a + b;
    }

    [WebMethod]
    public string SayHello()
    {
        return "Hello World";
    }
}
The "Virtual Directory Alias" screen opens. Type the virtual directory name (for example, MyWebServices) and click Next. The "Web Site Content Directory" screen opens. Here, enter the directory path name for the virtual directory (for example, c:\MyWebServices) and click Next. The "Access Permission" screen opens. Change the settings as per your requirements; let's keep the default settings for this exercise. Click the Next button. This completes the IIS configuration. Click Finish to complete the configuration.
If it does not work, try replacing localhost with the IP address of your machine. If it still does not work, check whether IIS is running; you may need to reconfigure IIS and the virtual directory.
Testing the Web Service
As we have just seen, writing web services is easy in the .NET Framework. Writing web service consumers is also easy in the .NET Framework. As said earlier, we will write two types of service consumers: one web-based and one Windows-application-based. Let's write our first web service consumer.
Web-Based Service Consumer
Write a web-based consumer as given below and call it WebApp.aspx; note that it is an ASP.NET application. Save this in the virtual directory of the web service (c:\MyWebServices\WebApp.aspx). This application has two text fields that are used to get numbers from the user to be added, and one button, Execute, that, when clicked, calls the Add and SayHello web services.
Compile it using c:\>csc /r:webservicename.dll WinApp.cs. This creates WinApp.exe; run it to test the application and the web service. To check the web service created with a console application, follow these steps:
1. Write the above code in a console application.
2. Right-click in Solution Explorer and choose Add Web Reference.
3. In the Add Web Reference window, choose Web Services on the local machine.
4. Type the URL and deploy it.
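A minimal sketch of the Windows consumer's code, assuming Add Web Reference generated a proxy class named FirstService under the default "localhost" namespace (both names depend on your project settings):

```
using System;

class WinApp
{
    static void Main()
    {
        // The proxy class is generated by Add Web Reference (or wsdl.exe);
        // "localhost" is the default namespace for a locally hosted service.
        localhost.FirstService service = new localhost.FirstService();

        // Each call issues an HTTP request to the .asmx endpoint.
        Console.WriteLine(service.SayHello());
        Console.WriteLine("2 + 3 = {0}", service.Add(2, 3));
    }
}
```

The proxy hides all SOAP plumbing, so the web methods are called like ordinary local methods.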
Windows Communication Foundation (code-named Indigo) is a programming platform and runtime system for building, configuring, and deploying network-distributed services. It is the latest service-oriented technology, and interoperability is its fundamental characteristic. WCF combines the features of Web Services, Remoting, MSMQ, and COM+, and it provides a common platform for all .NET communication. A Windows Communication Foundation service is primarily based on the
System.ServiceModel namespace, which is the programming interface for developers. The System.ServiceModel namespace is rich in its design, allowing a much easier programming interface. Before getting into the first programming sample of a WCF service, let's look at the programming model of a WCF service.
Programming Model: The programming model mainly consists of endpoints. The endpoint is the basic unit of communication in Windows Communication Foundation (WCF). Each endpoint is made up of three elements: an address, a binding, and a contract.
Address: The address specifies where the service can be found.
Binding: The binding specifies how a client can interact with the service.
Contract: The contract specifies how the service is implemented and what it can offer.
Now let's look at the various components that constitute a WCF service.
Address: Every WCF endpoint must have an address so that other endpoints can initiate communication. Each address consists of a URI and address properties; the address properties consist of an identity and optional headers. Endpoints can be added programmatically or in configuration files. Configuration files provide the flexibility of changing the endpoints in the future without needing to recompile and redeploy the application.
Binding: Bindings describe how the endpoints communicate with the external world. A binding specifies the protocol stack, the transport protocol, and the encoding.
Contracts:
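The address/binding/contract triple can be sketched for a self-hosted service; the contract name, base address, and port below are illustrative assumptions:

```
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class HostProgram
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(Greeter)))
        {
            // Endpoint = Address + Binding + Contract
            host.AddServiceEndpoint(
                typeof(IGreeter),                   // contract
                new BasicHttpBinding(),             // binding
                "http://localhost:8000/Greeter");   // address (assumed)

            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

The same endpoint could instead be declared in a configuration file, which is what makes it changeable without recompiling.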
A contract specifies what the endpoint communicates and what the service can offer. It is a collection of messages, called a message exchange (MEX). Open Visual Studio and create a new WCF Service Application:
In the existing solution there is a Web.config file. Instead of editing it at this moment, it is recommended to delete it, since we won't be using it for now; at the debugging stage, the IDE will automatically generate a new one with just the debugging settings present. As we look through the existing code, we can see that there is a lot of sample data that may or may not be useful for us. Now we have a clean Service1 class and a clean IService1 interface. NOTE: keep the default names for the service and related components as they are. Now the service class looks like this:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace REST_TEST
{
    public class Service1 : IService1
    {
    }
}

And the interface for the above-mentioned service looks like this:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace REST_TEST
{
    [ServiceContract]
    public interface IService1
    {
    }
}
Now, in the IService1 interface, define a sample method called GetDateTime that returns a string. The only thing we should add to it is an additional set of attributes that control how the method will be used by the WCF service. So we get:

[ServiceContract]
public interface IService1
{
    [WebGet(UriTemplate = "/")]
    [OperationContract]
    string GetDateTime();
}

With the help of WebGet, the method becomes available through a REST call. The UriTemplate defines the access point, here set to the root; therefore, when testing the service in the browser, the GetDateTime method is reached via the service's root address. We can also change this to something else, for example /GetDateTime:

[WebGet(UriTemplate = "/GetDateTime")]
This way, when we access the service by its root identifier, we will get an "Endpoint not found" error (since there is no endpoint defined for the root). However, if we access the /GetDateTime address, we will call the needed method and get the correct output. OperationContract specifies that the method is part of an operation that the service is bound to via a contract; we need this attribute in the method definition (inside the interface) in order to be able to call the method from the current service, Service1.svc. Now let's work on the actual method:

public string GetDateTime()
{
    return DateTime.Now.ToString();
}

This returns the current date and time. Due to the nature of the service, the data is serialized automatically when returned, so at this point we don't have to worry about manual serialization. Before going any further, we need to enable metadata publishing for the service. To do this, right-click Service1.svc in the Solution Explorer and select View Markup:
Once there, add the following property to the existing set:

Factory="System.ServiceModel.Activation.WebServiceHostFactory"

Now, right-click Service1.svc in Solution Explorer once again and select View in Browser. Once the browser opens, we should see a response similar to this:
This is absolutely normal, since we did not define an endpoint for the root location. Now, in the address bar, right after the service name, type /GetDateTime. When we navigate to that address, here is the output we should get:
Remote Objects, Clients, and Servers
Before stepping into the details of the .NET Remoting architecture, this section looks briefly at a remote object and a client-server application that uses this remote object. After that, the required steps and options are discussed in more detail.
Remote Objects
Remote objects are required for distributed computing. An object that should be called remotely from a different system must be derived from System.MarshalByRefObject. The MarshalByRefObject class has, in addition to the methods inherited from the Object class, methods to initialize and to get the lifetime services; the lifetime services define how long the remote object lives. To see .NET Remoting in action, create a Class Library for the remote object. The class Hello derives from System.MarshalByRefObject. In the constructor, a message is written to the console that provides information about the object's lifetime. In addition, add the method Greeting() that will be called from the client. To distinguish easily between the assembly and the class in the following sections, give them different names in the arguments of the method calls used: the name of the assembly is RemoteHello, and the class is named Hello.

File -> New -> Project -> Class Library

using System;

namespace Wrox.ProCSharp.Remoting
{
    public class Hello : System.MarshalByRefObject
    {
        public Hello()
        {
            Console.WriteLine("Constructor called");
        }

        public string Greeting(string name)
        {
            Console.WriteLine("Greeting called");
            return "Hello, " + name;
        }
    }
}
The Server
For the server, create a new C# console application called HelloServer. To use the TcpServerChannel class, you have to reference the System.Runtime.Remoting assembly. It's also required that you reference the RemoteHello assembly that was created earlier. In the Main() method, an object of type System.Runtime.Remoting.Channels.Tcp.TcpServerChannel is created with the port number 8086. This channel is registered with the System.Runtime.Remoting.Channels.ChannelServices class to make it available for remote objects. The remote object type is registered by calling the method RemotingConfiguration.RegisterWellKnownServiceType(). In the example, the type of the remote object class, the URI used by the client, and a mode are specified. The mode WellKnownObjectMode.SingleCall means that a new instance is created for every method call; in the sample application no state is held in the remote object. After registration of the remote object, it is necessary to keep the server running until a key is pressed:

File -> New -> Project -> Console Application

using System;
using System.Collections.Generic;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

namespace Wrox.ProCSharp.Remoting
{
    class Program
    {
        static void Main()
        {
            TcpServerChannel channel = new TcpServerChannel(8086);
            ChannelServices.RegisterChannel(channel, true);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(Hello), "Hi", WellKnownObjectMode.SingleCall);
            Console.WriteLine("Press return to exit");
            Console.ReadLine();
        }
    }
}
The Client
The client is again a C# console application: HelloClient. With this project, you also have to reference the System.Runtime.Remoting assembly so that the TcpClientChannel class can be used. In addition, you have to reference the RemoteHello assembly; although the object will be created on the remote server, the assembly is needed on the client so the proxy can read the type information at runtime. In the client program, create a TcpClientChannel object that's registered in ChannelServices. For the TcpChannel, you can use the default constructor, so a free port is selected. Next, the Activator class is used to return a proxy to the remote object. The proxy is of type System.Runtime.Remoting.Proxies.__TransparentProxy. This object looks like the real object because it offers the same methods; the transparent proxy uses the real proxy to send messages to the channel:

File -> New -> Project -> Console Application

using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

namespace Wrox.ProCSharp.Remoting
{
    class Program
    {
        static void Main()
        {
            Console.WriteLine("Press return after the server is started");
            Console.ReadLine();
            ChannelServices.RegisterChannel(new TcpClientChannel(), true);
            Hello obj = (Hello)Activator.GetObject(
                typeof(Hello), "tcp://localhost:8086/Hi");
            if (obj == null)
            {
                Console.WriteLine("could not locate server");
                return;
            }
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine(obj.Greeting("Christian"));
            }
        }
    }
}
Now you can start the server and then the client program.
Output: Server
Client
Message Queuing and asynchronous programming: Message Queuing can be done in a disconnected environment. At the time data is sent, the receiver can be offline; later, when the receiver goes online, it receives the data without the sending application having to intervene. You can compare connected and disconnected programming with talking to someone on the phone and sending an e-mail. When talking to someone on the phone, both participants must be connected at the same time: the communication is synchronous. With an e-mail, the sender isn't sure when the e-mail will be dealt with; people using this technology are working in a disconnected mode. Of course, the e-mail may never be dealt with; it may simply be ignored. That's in the nature of disconnected communication. To avoid this problem, it is possible to ask for a reply confirming that the e-mail has been read; if the answer doesn't arrive within a time limit, you may be required to deal with this exception. This is also possible with Message Queuing. In some ways, Message Queuing is e-mail for application-to-application communication instead of person-to-person communication. However, it gives you a lot of features that are not available with mailing services, such as guaranteed delivery, transactions, confirmations, express mode using memory, and so on. As we see in the next section, Message Queuing has many features useful for communication between applications.
Creating a Message Queue
Message queues can be created programmatically with the Create() method of the MessageQueue class. The path of the new queue must be passed to Create(); the path consists of the host name where the queue is located and the name of the queue.
Creating a Message Queue (Client Program)
In C#, we can create message queues programmatically using the Create() method of the MessageQueue class, passing the path of the new queue, which consists of the host name where the queue is located and the name of the queue. We will now write an example that creates a queue on the localhost. To create a private queue, the path name must include private$, for example: .\private$\MynewPrivateQueue. Once the Create() method has been invoked, the properties of the queue can be changed; using the Label property,
the label of the queue is set. The following program writes the path of the queue and the format name to the console. The format name is automatically created with a UUID (Universally Unique Identifier) that can be used to access the queue without the name of the server.
using System;
using System.Messaging;

namespace FirstQueue
{
    class Program
    {
        static void Main(string[] args)
        {
            using (MessageQueue queue = MessageQueue.Create(@".\Private$\FirstQueuessnsv"))
            {
                queue.Label = "Demo Queue";
                Console.WriteLine("Queue created:");
                Console.WriteLine("Path: {0}", queue.Path);
                Console.WriteLine("FormatName: {0}", queue.FormatName);
            }
            Console.ReadLine();
        }
    }
}
Sending a Message (Server Program)
By using the Send() method of the MessageQueue class, you can send a message to the queue. The object passed as an argument of the Send() method is serialized and written to the queue. The Send() method is overloaded so that a label and a MessageQueueTransaction object can also be passed. Now we will write a small example that checks whether the queue exists and creates it if it doesn't. Then the queue is opened and the message "First Message" is sent to it using the Send() method.
The path name specifies "." for the server name, which is the local system. Path names to private queues only work locally. Add the following code in a Visual Studio C# console project.
using System;
using System.Messaging;

namespace SendingMessage
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                if (!MessageQueue.Exists(@".\Private$\FirstQueuessnsv"))
                {
                    MessageQueue.Create(@".\Private$\FirstQueuessnsv");
                }
                MessageQueue queue = new MessageQueue(@".\Private$\FirstQueuessnsv");
                queue.Send("First Message", "Label");
            }
            catch (MessageQueueException ex)
            {
                Console.WriteLine(ex.Message);
            }
            Console.ReadLine();
        }
    }
}
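To complete the round trip, the message can be read back programmatically rather than through the admin tool. A sketch, reusing the queue path from the sending example (the XmlMessageFormatter must be told which body types to expect):

```
using System;
using System.Messaging;

class ReceivingMessage
{
    static void Main()
    {
        MessageQueue queue = new MessageQueue(@".\Private$\FirstQueuessnsv");

        // Tell the formatter which body types to deserialize.
        queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });

        // Receive() blocks until a message arrives in the queue.
        Message message = queue.Receive();
        Console.WriteLine("Label: {0}", message.Label);
        Console.WriteLine("Body:  {0}", message.Body);
    }
}
```

Receive() removes the message from the queue; Peek() could be used instead to read it without removing it.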
Build the application. Now we can view the message in the Computer Management admin tool [Control Panel -> Administrative Tools -> Computer Management -> Services and Applications -> Message Queuing -> Private Queues -> <queue name>] as shown in the figure above. By opening the message and selecting the Body tab of the dialog, we can see that the message was formatted using XML.
Supported credentials include X509 certificates and other custom binary and XML-based security tokens. In addition, username/password credentials can be used for authentication purposes. An enhanced security model provides a policy-driven foundation for securing web services. WSE also supports
the ability to establish a trust-issuing service for retrieval and validation of security tokens, as well as the ability to establish more efficient long-running secure communication via secure conversations.
The WS-Security Specification
WS-Security describes independent mechanisms to:
- describe assertions made about security
- determine if a message has been altered (message integrity)
- prevent a message from being read by an unauthorized party (message confidentiality)
These mechanisms can be used to provide end-to-end security for a message. Unlike SSL, which only provides encryption between two points, message security must apply to all the intermediaries through which a message flows. Consider an order that flows from an order entry application, to an inventory control system that checks for availability, to a billing system that processes the order, to a fulfillment system that ships it. Your credit card information should be encrypted so that only the billing system can read it. The inventory control system needs to know only the items requested and their quantity. If each of these intermediaries is a separate division of a company, or a separate company, the message is said to cross a trust domain. Trust is the degree to which one entity believes the claims of another, or allows another to undertake actions on its behalf. Since different companies or divisions trust each other to varying degrees, the security models built with WS-Security have to allow for varying levels of trust. In our mortgage example, the Credit Agency, the Bank, and the Loan officer are different trust domains. The level of trust between the Mortgage decision service and the Bank Portal is much higher than the level of trust between the Credit Agency and the Bank, or the Bank and the Loan officer. Each of these mechanisms uses a security token to represent a security claim. For example, an X509 certificate represents an association between a public key and an identity that is verified by some third party. An X509 security token represents this association. If you know how WSE 2.0 implements these security tokens, you will understand how WSE 2.0 implements WS-Security. Each security token is associated with a particular type of security protocol such as username and password, X509 certificates, Kerberos, or SAML. Each of these protocols has an associated specification.
For example, X509 certificates are described in the Web Services Security X509 Certificate Token Profile specification. These tokens and the associated information are placed in the SOAP headers associated with the SOAP message as described in previous articles.
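As a hedged illustration of how such a token rides in a SOAP header (the element and namespace names follow the OASIS WS-Security 1.0 secext schema; the username and password values are placeholders, and attributes such as the password Type URI are omitted for brevity), a UsernameToken carried in a Security header looks roughly like this:

```
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
        <wsse:Username>alice</wsse:Username>
        <wsse:Password>placeholder-password</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- application payload -->
  </soap:Body>
</soap:Envelope>
```

The point to notice is that the token lives in the header, separate from the body, so any intermediary that understands WS-Security can inspect or validate it without touching the application payload.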
It is also important to be clear about what WS-Security and its associated specifications do not do. The specification tells you how to place a security token such as username and password, or X509 certificate, in a SOAP message. How you use this token to authenticate a user is up to you. You can decide to store your passwords in plain text in a file that is available to everyone. You can decide to ignore an X509 certificate that is present in a given message. Although the specification recommends you do not do so, you can accept the X509 certificate even if you know it is no longer valid. While the specification allows you to implement different trust levels for different trust domains, it does not tell you how to determine those different levels of trust. You decide how the security protocols you use are implemented. WS-Security does not tell you how to develop a security model. While a discussion of security models is beyond the scope of these articles, one example should make clear their importance. Any sophisticated electronic commerce application has to deal with non-repudiation. When you make a purchase at a store's physical location, and sign a credit card slip, that signature can be used to prevent you from repudiating, or claiming you never made, that transaction. WS-Security tells you how to place a digital signature in a message, but it does not tell you how to implement non-repudiation in your application.
WSE Policy Framework The WSE policy framework describes constraints and requirements for communicating with an ASMX (ASP.NET Web services framework) or WSE Web service. Policy is a deployment-time concept. An instance of a policy is translated into run-time components that apply the requirements at the sender or enforce them at the receiver. This process of translation, referred to as policy compilation, takes place before any messages are exchanged between the client and the service. The atomic run-time components are called SOAP filters and operate at the level of a single SOAP envelope. Their primary responsibility is to inspect or transform the SOAP envelope. For example, a SOAP filter on the sender may digitally sign the message and add a Security header to it, as defined in the WS-Security specification. Similarly, a SOAP filter on the receiver may extract the Security header from the SOAP envelope and require that the digital signature it contains be successfully verified before dispatching the message to the application code.
Using the WSE Router One of the benefits of using the WSE router with a distributed application is that a computer hosting a Web service can be taken offline for maintenance without modifying either the client code
or its configuration. An administrator of the computer hosting the WSE router can make all the changes needed to redirect the SOAP messages to another computer. To do so, an administrator prepares a backup computer to be brought online to host the Web service, while the router continues to delegate SOAP messages to the primary computer. Then the administrator prepares a Web.config file that specifies the file containing the referral cache and a new referral cache containing the URL for the backup computer. A referral cache is an XML file that contains the ultimate destination for URLs received by the router. Then, when the primary computer is taken offline, the Web.config and referral cache are placed on the computer hosting the router. Subsequent SOAP messages are then dynamically routed to the backup computer, all unbeknownst to the client application, which is still sending SOAP messages to the router. The following graphic depicts sending SOAP messages to a WSE router, which delegates the SOAP message to another Web server based on the contents of a referral cache. In this graphic, the referral cache is configured to send the SOAP message to Web server B, but the referral cache could be modified to send it to Web server C.
We use the term language-integrated query to indicate that query is an integrated feature of the developer's primary programming languages (for example, Visual C#, Visual Basic). Language-integrated query allows query expressions to benefit from the rich metadata, compile-time syntax checking, static typing, and IntelliSense that were previously available only to imperative code. Language-integrated query also allows a single general-purpose declarative query facility to be applied to all in-memory information, not just information from external sources.
Example 1
To see language-integrated query at work, we'll begin with a simple program that queries an in-memory array of names:

using System;
using System.Collections.Generic;
using System.Linq;

class App
{
    static void Main()
    {
        string[] names = { "Burke", "Connor", "Frank",
                           "Everett", "Albert", "George",
                           "Harris", "David" };

        IEnumerable<string> query = from s in names
                                    where s.Length == 5
                                    orderby s
                                    select s.ToUpper();

        foreach (string item in query)
            Console.WriteLine(item);
    }
}
Output:
BURKE
DAVID
FRANK
Explanation of the language-integrated query in the first statement of our program:
IEnumerable<string> query = from s in names where s.Length == 5 orderby s select s.ToUpper();
Example 2
1. Create a new website and add a class by right-clicking in the Solution Explorer and choosing Add New Item. Name the class file Location.cs.
2. Enter the code:

using System;
using System.Collections.Generic;
using System.Web;

public class Location
{
    // Fields
    private string _country;
    private int _distance;
    private string _city;

    // Properties
    public string Country
    {
        get { return _country; }
        set { _country = value; }
    }
    public int Distance
    {
        get { return _distance; }
        set { _distance = value; }
    }
    public string City
    {
        get { return _city; }
        set { _city = value; }
    }
}
<HeaderStyle BackColor="Tan" Font- <AlternatingRowStyle BackColor="PaleGoldenrod" /> </asp:GridView> </form> </body> </html>
new Location { City="London", Distance=4789, Country="UK" },
new Location { City="Amsterdam", Distance=4869, Country="Netherlands" },
new Location { City="San Francisco", Distance=684, Country="USA" },
new Location { City="Las Vegas", Distance=872, Country="USA" },
new Location { City="Boston", Distance=2488, Country="USA" },
new Location { City="Raleigh", Distance=2363, Country="USA" },
new Location { City="Chicago", Distance=1733, Country="USA" },
new Location { City="Charleston", Distance=2421, Country="USA" },
new Location { City="Helsinki", Distance=4771, Country="Finland" },
new Location { City="Nice", Distance=5428, Country="France" },
new Location { City="Dublin", Distance=4527, Country="Ireland" }
};
GridView1.DataSource = from location in cities
                       where location.Distance > 1000
                       orderby location.Country, location.City
                       select location;
5. Compile and run the application.
Output:
Cities and their Distances
Country      City           Distance from Seattle
Finland      Helsinki       4771
France       Nice           5428
Ireland      Dublin         4527
Netherlands  Amsterdam      4869
UK           London         4789
USA          Boston         2488
USA          Charleston     2421
USA          Chicago        1733
USA          Raleigh        2363

We can use the new DataPager control to add paging functionality to our data controls seamlessly. The ListView control uses templates for displaying data, and it supports many additional templates that allow for more scenarios when working with our data. The ListView control can be bound to a SQL data source.
LayoutTemplate - The root template that defines a container object, such as a table, div, or span element, that will contain the content defined in the ItemTemplate or GroupTemplate template. It might also contain a DataPager object.
ItemTemplate - Defines the data-bound content to display for individual items.
ItemSeparatorTemplate - Defines the content to render between individual items.
GroupTemplate - Defines a container object, such as a table row (tr), div, or span element, that will contain the content defined in the ItemTemplate and EmptyItemTemplate templates. The number of
items that are displayed in a group is specified by the GroupItemCount property.
GroupSeparatorTemplate - Defines the content to render between groups of items.
EmptyItemTemplate - Defines the content to render for an empty item when a GroupTemplate template is used. For example, if the GroupItemCount property is set to 5, and the total number of items returned from the data source is 8, the last group of data displayed by the ListView control will contain three items as specified by the ItemTemplate template, and two items as specified by the EmptyItemTemplate template.
EmptyDataTemplate - Defines the content to render if the data source returns no data.
SelectedItemTemplate - Defines the content to render for the selected data item to differentiate the selected item from other items.
AlternatingItemTemplate - Defines the content to render for alternating items to make it easier to distinguish between consecutive items.
EditItemTemplate - Defines the content to render when an item is being edited. The EditItemTemplate template is rendered in place of the ItemTemplate template for the data item that is being edited.
InsertItemTemplate - Defines the content to render to insert an item. The InsertItemTemplate template is rendered in place of an ItemTemplate template at either the start or the end of the items that are displayed by the ListView control. You can specify where the InsertItemTemplate template is rendered by using the InsertItemPosition property of the ListView control.
Data Pager
The DataPager Web control is used to page data and display navigation controls for data-bound controls that implement the IPageableItemContainer interface. ListView implements IPageableItemContainer and will use DataPager to support paging. Here we are using a database table ds with a column name. File -> New -> Web Site. In the source file, type the code below:
<asp:SqlDataSource</asp:SqlDataSource> <asp:ListView
<LayoutTemplate> <asp:DataPager <Fields> <asp:NumericPagerField </Fields> </asp:DataPager> <div ID="itemPlaceholderContainer" runat="server" style="font-family: Verdana, Arial, Helvetica, sans-serif;"> <span ID="itemPlaceholder" runat="server" /> </div> <div style="text-align: center;background-color: #CCCCCC;font-family: Verdana, Arial, Helvetica, sans-serif;color: #000000;"> </div> </LayoutTemplate> <ItemTemplate> <span style="background-color: #DCDCDC;color: #000000;">Name: <asp: Label <br /> <br /> </span> </ItemTemplate> </asp:ListView>
Output
CHAPTER VI Developing Database Applications using ADO.NET and XML
XML
XML (eXtensible Markup Language): A markup language for organizing information in text files.
XML File: A text file that contains information organized in a structure that meets XML standards.
Main features of XML
XML files are text files, which can be managed by any text editor.
XML is very simple: it has fewer than 10 syntax rules.
XML is extensible, because it specifies only the structural rules of tags, with no specification of the tags themselves.
Advantages of XML XML provides a basic syntax that can be used to share information between different kinds of computers, different applications, and different organizations. XML data is stored in plain text format. This software- and hardware-independent way of storing data allows different
incompatible systems to share data without needing to pass them through many layers of conversion. This also makes it easier to expand or upgrade to new operating systems, new applications, or new browsers, without losing any data. XML provides a gateway for communication between applications, even applications on wildly different systems. As long as applications can share data (through HTTP, file sharing, or another mechanism), and have an XML parser, they can share structured information that is easily processed. It supports Unicode, allowing almost any information in any written human language to be communicated. It can represent common computer science data structures: records, lists and trees. Its self-documenting format describes structure and field names as well as specific values. XML is heavily used as a format for document storage and processing, both online and offline. The hierarchical structure is suitable for most (but not all) types of documents.
It is platform-independent, and thus relatively immune to changes in technology.
How to create your first XML file
1. Use any text editor to enter the following lines of text into a file:
<?xml version="1.0" encoding="UTF-8"?>
<p>Hello world!</p>
2. Save this file with the name "hello.xml". We have successfully created an XML file. The <?xml ...?> statement is called the processing instruction. Every XML file must contain one "xml" processing instruction at the beginning of the file to declare that the file is an XML file.
Version - A required attribute that specifies the version of the XML standard this XML file conforms to. Currently, there are two versions of the XML standard: 1.0 and 1.1.
Encoding - An optional attribute that specifies the character encoding used in this XML file. Many encodings are supported by most XML applications: UTF-8, UTF-16, ISO-10646-UCS-2, ISO-10646-UCS-4, ISO-8859-1 ... ISO-8859-9, ISO-2022-JP, Shift_JIS, EUC-JP.
What is XML Schema? XML Schema is an XML schema language recommended by the W3C (World Wide Web Consortium) in 2001. An XML schema is a set of rules to which an XML document must conform in order to be considered 'valid' according to that schema. An XML schema language is a set of rules on how to write an XML schema. An XML Schema describes the structure of an XML document.
What is XSD? XSD stands for XML Schema Definition. An XML schema document written in XML Schema is called an XSD and typically has the filename extension ".xsd". Formally, XML Schema and XSD refer to two different things: XML Schema refers to the schema language and XSD refers to a schema document written in XML Schema. But XML Schema is sometimes informally referred to as XSD.
Creating Schema Documents - the "schema" Element We have two options:
1. empty_default_namespace.xsd - Using the default namespace:
<?xml version="1.0"?>
<schema xmlns="">
</schema>
Declaring Root Elements - the "element" Element Every XML document must have a root element. So the first thing we need to do in a schema is declare a root element for the conforming XML documents.
Rule 3. The namespace of all elements used in the XML representation must be the schema namespace, 'http://www.w3.org/2001/XMLSchema'.
A simple schema example represented as an XML document, word.xsd:
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="word" type="xs:string"/>
</xs:schema>
It declares that the conforming XML document must have a root element called "word".
Example 1: Let's look at a simple example of an XSD (an XML schema written in the XML Schema language). Here is the content of the XSD example file, hello.xsd:
<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="p" type="xsd:string"/>
</xsd:schema>
Code Explanation:
This XSD file is a true XML document with the root element named as "schema". This XSD file contains one "Element Declaration" statement <element ...>. The "Element Declaration" statement declares that the conforming XML documents must have a root element named as "p".
The "Element Declaration" statement also declares that the root element "p" should have content of "string" type.
Obviously, hello.xsd is a very simple XML schema. Writing an XML document that conforms to hello.xsd is easy. Here is an example, hello.xml:
<?xml version="1.0" encoding="UTF-8"?>
<p>Hello world!</p>
Example 2
<?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet <xsl:output <xsl:template <xsl:copy> <xsl:apply-templates </xsl:copy> </xsl:template>
<xsl:template <html> <body> <h2>Books</h2> <table border="1"> <tr bgcolor="#bc03d6"> <th>Title</th> </tr> <xsl:for-each <tr> <td> <xsl:value-of </td>
Books
Example 2
1. Create a website and add an XML file with code as below:
<?xml version="1.0" encoding="utf-8" ?>
<?xml-stylesheet type="text/xsl" href="XSLTFile4.xslt"?>
<!-- Edited by XMLSpy -->
<catalog>
  <cd>
    <name>Rose</name>
    <color>Red</color>
    <country>USA</country>
    <company>Columbia</company>
    <price>10.90</price>
    <year>1985</year>
  </cd>
  <cd>
    <name>Jasmine</name>
    <color>White</color>
    <country>UK</country>
    <company>CBS Records</company>
    <price>9.90</price>
    <year>1988</year>
  </cd>
</catalog>
xmlns: <xsl:template <html> <body> <h2 style="Margin-left=55px;">My Flower Collection</h2> <table border="1"> <tr bgcolor="#9acd32"> <th>Name</th> <th>Color</th> <th>Country</th> <th>Company</th> <th>Price</th> <th>Year</th> </tr>
My Flower Collection
Name     Color  Country  Company      Price  Year
Rose     Red    USA      Columbia     10.90  1985
Jasmine  White  UK       CBS Records  9.90   1988
authors/author/* Using additional patterns within square brackets can specify branches on the path. For example, the following query describes a branch on the <author> element, indicating that only the <author> elements with <nationality> children should be considered as a pattern match. authors/author[nationality]/name This becomes even more useful for the sample data when comparisons are added. The following query returns the names of Russian authors. Note comparisons can be used only within brackets. authors/author[nationality='Russian']/name Attributes are indicated in a query by preceding the name of the attribute with "@". The attribute can be tested as a branch off the main path, or the query can identify attribute nodes. The following examples return authors from the classical period, and just the two period attributes, respectively. authors/author[@period="classical"] authors/author/@period
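The XPath expressions above can be exercised from C# with XmlDocument.SelectNodes. The authors.xml document sketched here is an assumption shaped to match the sample queries (element names authors, author, nationality, name, and a period attribute are taken from the queries; the author names are placeholders):

```csharp
using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        // Hypothetical document shaped to match the sample queries.
        string xml =
            "<authors>" +
            "  <author period='classical'>" +
            "    <name>Tolstoy</name><nationality>Russian</nationality>" +
            "  </author>" +
            "  <author period='modern'>" +
            "    <name>Twain</name><nationality>American</nationality>" +
            "  </author>" +
            "</authors>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);

        // Only <author> elements with a Russian <nationality> child match.
        foreach (XmlNode n in doc.SelectNodes("authors/author[nationality='Russian']/name"))
            Console.WriteLine(n.InnerText);   // prints: Tolstoy

        // Attribute test used as a branch off the main path.
        foreach (XmlNode n in doc.SelectNodes("authors/author[@period=\"classical\"]/name"))
            Console.WriteLine(n.InnerText);   // prints: Tolstoy
    }
}
```

Both queries select the same author here, but for different reasons: the first filters on a child element's value, the second on an attribute's value.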
The XmlTextReader class is used to read Extensible Markup Language (XML) from a file. XmlTextReader provides direct parsing and tokenizing of XML and implements the XML 1.0 specification as well as the Namespaces in XML specification from the World Wide Web Consortium (W3C). The XmlTextReader, XmlNodeReader, and XmlValidatingReader classes are derived from the XmlReader class.
Reading an XML file:
XmlTextReader reader = new XmlTextReader(@"C:\Documents and Settings\PuranMAC\My Documents\Visual Studio 2008\Projects\ConsoleApplication2\ConsoleApplication2\XMLFile1.xml");
Console.WriteLine("General Information");
Console.WriteLine("= = = = = = = = = ");
Console.WriteLine(reader.Name);
Console.WriteLine(reader.BaseURI);
Console.WriteLine(reader.LocalName);
The NodeType property returns the type of the current node as an XmlNodeType enumeration value:
XmlNodeType type = reader.NodeType;
which identifies the type of the node.
Example
1. Create a console application:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;

namespace xmlreaderwrit
{
    class Program
    {
        static void Main(string[] args)
        {
            XmlTextReader reader = new XmlTextReader("books.xml");
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element: // The node is an element.
                        Console.Write("<" + reader.Name);
                        Console.WriteLine(">");
                        break;
                    case XmlNodeType.Text: // Display the text in each element.
                        Console.WriteLine(reader.Value);
                        break;
                    case XmlNodeType.EndElement: // Display the end of the element.
                        Console.Write("</" + reader.Name);
                        Console.WriteLine(">");
                        break;
                }
            }
            Console.ReadLine();
        }
    }
}
</first-name> <last-name> Franklin </last-name> </author> <price> 8.99 </price> </book> </bookstore>
3. In the projects folder, copy books.xml file and paste into Bin ---> Debug 4. Run the console application. Output<bookstore> <book> <title> The Autobiography of Benjamin Franklin </title> <author> <first-name> Benjamin </first-name> <last-name> Franklin </last-name> </author> <price> 8.99 </price> </book> </bookstore>
The XmlWriter and XmlTextWriter classes are defined in the System.Xml namespace. The XmlTextWriter class is derived from the XmlWriter class, which represents a writer that provides a fast, non-cached, forward-only way of generating XML documents based on the W3C Extensible Markup Language (XML) 1.0 specification. Since the XML classes are defined in the System.Xml namespace, the first thing you need to do is add a reference to System.Xml and import the namespace:
using System.Xml;
XmlWriter is an abstract base class that defines an interface for writing XML. The following list shows that the XmlWriter has methods and properties defined to:
Write well-formed XML.
Manage the output, including methods to determine the progress of the output, with the WriteState property.
Flush or close the output.
Write valid names, qualified names, and name tokens.
using System;
using System.Xml;

namespace writerx
{
    class Program
    {
        static void Main(string[] args)
        {
            XmlTextWriter XmlWriter = new XmlTextWriter(Console.Out);
            XmlWriter.WriteStartDocument();
            XmlWriter.WriteComment("This is the comments.");
            XmlWriter.WriteStartElement("BOOK");
            XmlWriter.WriteElementString("TITLE", "this is the title.");
            XmlWriter.WriteElementString("AUTHOR", "I am the author.");
            XmlWriter.WriteElementString("PUBLISHER", "who is the publisher");
            XmlWriter.WriteEndElement();
            XmlWriter.WriteEndDocument();
            Console.ReadLine();
        }
    }
}
<?xml version="1.0" encoding="IBM437"?><!--This is the comments.--><BOOK><TITLE>this is the title.</TITLE><AUTHOR>I am the author.</AUTHOR><PUBLISHER>who is the publisher</PUBLISHER></BOOK>
Difference between XmlWriter and XmlReader
XmlReader represents a reader that provides fast, non-cached, forward-only access to XML data. XmlWriter is the opposite: it is used only to write an XML document.
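The two halves can be seen together in a short round-trip sketch (the element names are reused from the writer example above; writing to a string buffer instead of the console is an illustrative choice, not part of the original example):

```csharp
using System;
using System.IO;
using System.Xml;

class RoundTrip
{
    static void Main()
    {
        // Write a small document into a string buffer instead of the console.
        StringWriter buffer = new StringWriter();
        XmlTextWriter writer = new XmlTextWriter(buffer);
        writer.WriteStartElement("BOOK");
        writer.WriteElementString("TITLE", "this is the title.");
        writer.WriteEndElement();
        writer.Flush();

        // Read the same markup back, node by node, with a reader.
        XmlTextReader reader = new XmlTextReader(new StringReader(buffer.ToString()));
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Text)
                Console.WriteLine(reader.Value);   // prints: this is the title.
        }
    }
}
```

This shows the asymmetry directly: the writer pushes nodes out in document order, while the reader pulls them back in the same forward-only order.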
What is the DOM API? The DOM API is ideal when we want to manage XML data or access a complex data structure repeatedly. The DOM API:
Builds the data as a tree structure in memory.
Parses an entire XML document at one time.
Allows applications to make dynamic updates to the tree structure in memory. (As a result, we could use a second application to create a new XML document based on the updated tree structure that is held in memory.)
Using the DOM API preserves the structure of the document (and the relationship between elements) and does the parsing up front so that we do not have to do the parsing process over again each time we access a piece of data. If you choose to validate your document, we can be assured that the syntax of the data is valid while we are working with it. However, the DOM API requires the memory needed to store the data, which can be expensive in terms of machine cycles. In addition, the DOM API is, by nature, a two-step process: It parses the entire XML document. Applications interact with the XML data held in memory using the C/C++ APIs. As a result, we cannot begin working with the data until the DOM API has completely parsed the entire document.
The System.Xml.XmlDocument class implements the core XML Document Object Model (DOM) parser of the .NET Framework. XML content can be classified broadly into a collection of nodes and attributes of those nodes. You can use the DOM model implementation of System.Xml to query or to access these nodes and attributes of XML documents. The DOM implementation of System.Xml provides several ways to access these nodes and the attributes. The .NET DOM implementation (System.Xml.XmlDocument) supports all of DOM Level 1 and all of DOM Level 2 Core, but with a few minor naming changes. As mentioned earlier, central to the DOM API is the DOM tree data structure. The tree consists of nodes or elements. All nodes of a typical XML document (elements, processing instructions, attributes, text, comments, and so on) are represented in the DOM tree. Because the DOM tree is built in memory for the entire document, we are free to navigate anywhere in the document.
Example
1. Use Notepad or a similar text editor to save the following data as a file named E:\file1.xml:
<?xml version='1.0' encoding='ISO-8859-1' standalone='yes'?>
<Collection>
  <Book Id='1' ISBN='1-100000ABC-200'>
    <Title>Principle of Relativity</Title>
    <!-- Famous physicist -->
    <Author>Albert Einstein</Author>
    <Genre>Physics</Genre>
  </Book>
  <Book Id='2' ISBN='1-100000ABC-300'>
    <!-- Also came as a TV serial -->
    <Title>Cosmos</Title>
    <Author>Carl Sagan</Author>
    <Genre>Cosmology</Genre>
  </Book>
  <!-- Add additional books here -->
</Collection>
2. Create a Console Application with the following code:

using System;
using System.Xml;
using System.Text;

namespace ConsoleApplication1
{
    class Class1
    {
        public static int GetNodeTypeCount(XmlNode node, XmlNodeType nodeType)
        {
            // Recursively loop through the given node and return
            // the number of occurrences of a specific nodeType.
            int i = 0;
            if (node.NodeType == nodeType)
                i = i + 1;
            if (node.HasChildNodes)
                foreach (XmlNode cNode in node.ChildNodes)
                    i = i + GetNodeTypeCount(cNode, nodeType);
            return i;
        }
        [STAThread]
        static void Main(string[] args)
        {
            try
            {
                // Create an Xml document instance and load XML data.
                XmlDocument doc = new XmlDocument();
                doc.Load("E:\\file1.xml");

                // 1. Select all the Book titles by using an XPath query.
                XmlNodeList nodeList = doc.SelectNodes("//Book/Title");
                XmlNode node;
                Console.WriteLine("{0}", "TITLES LIST: ");
                foreach (XmlNode nd in nodeList)
                    Console.WriteLine("{0}", nd.InnerText);

                // 2. Read the XmlDeclaration values.
                XmlDeclaration decl = (XmlDeclaration)doc.FirstChild;
                Console.WriteLine("\n{0}", "XML DECLARTION:");
                Console.WriteLine("{0}", "Version = " + decl.Version);
                Console.WriteLine("{0}", "Encoding = " + decl.Encoding);
                Console.WriteLine("{0}", "Standalone = " + decl.Standalone);

                // 3. Move to the first node of the DOM and get all of its attributes.
                XmlElement root = doc.DocumentElement;
                node = root.FirstChild;
                Console.WriteLine("\n{0}", "ATTRIBUTES OF THE FIRST CHILD:");
                foreach (XmlAttribute attr in node.Attributes)
                    Console.WriteLine("{0}", attr.Name + " = " + attr.InnerText);

                // 4. Navigate to the child nodes of the first Book node.
                Console.WriteLine("\n{0}", "FIRST NODE'S CHILDREN:");
                if (node.HasChildNodes)
                    foreach (XmlNode cNode in node.ChildNodes)
                        Console.WriteLine("{0}", cNode.OuterXml);

                // 5. Navigate to the next sibling of the first Book node.
                node = node.NextSibling;
                Console.WriteLine("\n{0}", "NEXT SIBLING:");
                if (node != null)
                    Console.WriteLine("{0}", node.OuterXml);

                // 6. Get the parent node details of the current node.
                Console.WriteLine("\n{0}", "PARENT NODE NAME = " + node.ParentNode.Name);
                Console.WriteLine("{0}", "PARENT NODE HAS " + node.ParentNode.ChildNodes.Count + " CHILD NODES");
                Console.WriteLine("{0}", "PARENT NODE'S NAMESPACE URI = " + node.ParentNode.NamespaceURI);

                // 7. Count the number of Comment nodes in the document.
                // You could search for other types in the same way.
                int commentNodes = Class1.GetNodeTypeCount(doc.DocumentElement, XmlNodeType.Comment);
                Console.WriteLine("\n{0}\n", "NUMBER OF COMMENT NODES IN THE DOC = " + commentNodes);
                Console.ReadLine();
            }
            catch (XmlException xmlEx) // Handle the XML Exceptions here.
            {
                Console.WriteLine("{0}", xmlEx.Message);
            }
            catch (Exception ex) // Handle the Generic Exceptions here.
            {
                Console.WriteLine("{0}", ex.Message);
            }
            Console.ReadLine();
        }
    }
}
Compile and then run the application. NOTE: The file file1.xml should be in the same folder as the executable file (or we can modify the file path in the code).
Output:
TITLES LIST:
Principle of Relativity
Cosmos

XML DECLARTION:
Version = 1.0
Encoding = ISO-8859-1
Standalone = yes

ATTRIBUTES OF THE FIRST CHILD:
Id = 1
ISBN = 1-100000ABC-200

FIRST NODE'S CHILDREN:
<Title>Principle of Relativity</Title>
<!-- Famous physicist -->
<Author>Albert Einstein</Author>
<Genre>Physics</Genre>

NEXT SIBLING:
<Book Id="2" ISBN="1-100000ABC-300"><!-- Also came as a TV serial --><Title>Cosmos</Title><Author>Carl Sagan</Author><Genre>Cosmology</Genre></Book>

PARENT NODE NAME = Collection
PARENT NODE HAS 3 CHILD NODES
PARENT NODE'S NAMESPACE URI =
NUMBER OF COMMENT NODES IN THE DOC = 3

Example 2
Create a new project and a console application. The code for this file is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.IO;
using System.Collections;
namespace dom
{
    class Program
    {
        Hashtable attributes;
        public void Person()
        {
            attributes = new Hashtable();
            attributes.Add("id", null);
            attributes.Add("ssn", null);
            // attributes.Add("FirstName", null);
            attributes.Add("LastName", null);
            attributes.Add("City", null);
            attributes.Add("State", null);
            attributes.Add("Street", null);
            attributes.Add("ZipCode", null);
            attributes.Add("Title", null);
            attributes.Add("Description", null);
        }
        public object GetID() { return attributes["id"]; }
        // public object GetFirstName() { return attributes["FirstName"]; }
        public object GetLastName() { return attributes["LastName"]; }
        public object GetSSN() { return attributes["ssn"]; }
        public object GetCity() { return attributes["City"]; }
        public object GetState() { return attributes["State"]; }
        public object GetStreet() { return attributes["Street"]; }
        public object GetZip() { return attributes["ZipCode"]; }
        public object GetTitle() { return attributes["Title"]; }
        public object GetDescription() { return attributes["Description"]; }
        public void SetID(object o) { attributes["id"] = o; }
        // public void SetFirstName(object o) { attributes["FirstName"] = o; }
        public void SetLastName(object o) { attributes["LastName"] = o; }
        public void SetSSN(object o) { attributes["ssn"] = o; }
        public void SetCity(object o) { attributes["City"] = o; }
        public void SetState(object o) { attributes["State"] = o; }
        public void SetStreet(object o) { attributes["Street"] = o; }
        public void SetZip(object o) { attributes["ZipCode"] = o; }
        public void SetTitle(object o) { attributes["Title"] = o; }
        public void SetDescription(object o) { attributes["Description"] = o; }
        public void ToXml()
        {
            XmlTextWriter tw = new XmlTextWriter(Console.Out);
            XmlNode childNode = null;
            tw.Formatting = Formatting.Indented;
            XmlDocument doc = new XmlDocument();
            XmlNode node = doc.CreateElement("Person");
            AppendAttribute(doc, node, "id");
            AppendAttribute(doc, node, "ssn");
            doc.AppendChild(node);
            childNode = AppendElement(doc, node, "Name", true);
            AppendElement(doc, childNode, "LastName", false);
            // AppendElement(doc, childNode, "FirstName", false);
            childNode = AppendElement(doc, node, "Address", true);
            AppendElement(doc, childNode, "Street", false);
            AppendElement(doc, childNode, "City", false);
            AppendElement(doc, childNode, "State", false);
            AppendElement(doc, childNode, "ZipCode", false);
            childNode = AppendElement(doc, node, "Job", true);
            AppendElement(doc, childNode, "Title", false);
            AppendElement(doc, childNode, "Description", false);
doc.WriteContentTo(tw);
private XmlNode AppendElement(XmlDocument doc, XmlNode parent, string name, bool containerElement)
{
    XmlNode child = doc.CreateNode(XmlNodeType.Element, name, "");
    if (!containerElement)
    {
        child.AppendChild(doc.CreateTextNode((string)attributes[name]));
    }
    parent.AppendChild(child);
    return child;
}

private void AppendAttribute(XmlDocument doc, XmlNode parent, string name)
{
    XmlAttribute child = doc.CreateAttribute(name);
    child.Value = (string)attributes[name];
    parent.Attributes.Append(child);
}

static void Main(string[] args)
{
    Program p = new Program();
    p.Person();
    // p.SetFirstName("Jack");
    p.SetID("jnicholson");
    p.SetSSN("123456789");
    p.SetLastName("Nicholson");
    p.SetStreet("101 Acting Blvd");
    p.SetCity("Beverly Hills");
    p.SetState("CA");
    p.SetZip("90210");
    p.SetTitle("Actor");
    p.SetDescription("Acted as Colonel Nathan Jessop in A Few Good Men");
    p.ToXml();
    Console.ReadLine();
}
}
}
Description: The ToXml() method uses the AppendAttribute and AppendElement helper methods to abstract out the creation of elements and attributes. We start by creating the XmlTextWriter and signaling it to pretty-print the XML to Console.Out:

XmlTextWriter tw = new XmlTextWriter(Console.Out);
XmlNode childNode = null;
tw.Formatting = Formatting.Indented;

Next, we create the XmlDocument object to which all the elements will be appended. We create a Person element from the document object and append it to the document. Next, we use the AppendAttribute call to set the attributes of the Person node:

XmlNode node = doc.CreateElement("Person");
AppendAttribute(doc, node, "id");

The AppendAttribute method creates an attribute, sets its value, and then appends the attribute node to the collection of attributes of the parent node. Note that in the .NET API, an attribute is a type of node and hence must be appended to the parent node just as any other child node is:
XmlAttribute child = doc.CreateAttribute(name);
child.Value = (string)attributes[name];
parent.Attributes.Append(child);

Next, we use repeated calls to the AppendElement method to append the element nodes to the document. The method creates an element node and sets its value from the object if it is not a container node. (A container node is an element node that is not a text node; that is, it contains other element nodes as its children.) The method returns the child node created, and thus successive calls to the AppendElement method will append new element nodes to the most recent child node:

XmlNode child = doc.CreateNode(XmlNodeType.Element, name, "");
if (!containerElement)
{
    child.AppendChild(doc.CreateTextNode((string)attributes[name]));
}
parent.AppendChild(child);
return child;
ADO.NET
ADO.NET is the .NET version of ADO, but the two have little in common. ADO uses a connected data-access approach, while ADO.NET uses a disconnected data-access approach, which is totally different. In ADO, the data source connection stays open for the lifetime of the application, which leads to concerns such as database security, network traffic and system performance. By contrast, ADO.NET uses a disconnected architecture: the data source connection is kept open only while it is needed. ADO.NET is essentially the data-access component that ships with .NET.

EVOLUTION
Let's step into its evolution. Traditional data processing depends primarily on a connected, two-tier model. Today's data processing uses multi-tier architectures, so there is a need to switch over to a disconnected approach for better scalability. Today's web application model uses XML to encode data and HTTP as the medium of communication between tiers, so maintaining state between requests is mandatory (a connected approach does not need to handle state, since the connection is open for the program's lifetime). So ADO.NET, which works with any component on any platform
that understands XML, was designed for this new programming model, which comprises:
- Disconnected data architecture
- Tight integration with XML
- Common data representation

ADO.NET ARCHITECTURE
ADO.NET uses a structured process flow built from components. Data in the data source is retrieved using the various components of a data provider, supplied to the application, and changes are then updated back to the database. The ADO.NET classes live in the System.Data namespace and are used in an application by importing it, for instance:

Imports System.Data (VB)
using System.Data; (C#)

ADO.NET clearly factors data access from data manipulation.
ADO.NET Key Components
1. The DataSet object
2. The data provider object

The DataSet Object
It is an in-memory representation of data. It can be used with multiple and differing data sources, such as an XML file or stream, or data local to the application. A DataSet can be used if an application meets any of the following requirements:
- Remote data access
- Multiple and differing data sources
- Caching data local to the application
- Providing a hierarchical XML view of relational data and using XSL/XPath tools
- Disconnected data access
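As a minimal self-contained sketch of that in-memory model (the DataSet, table and column names here are invented purely for illustration), a DataSet can hold a table entirely in memory with no database connection, accept rows while disconnected, and expose the data as the hierarchical XML view mentioned above:

```csharp
using System;
using System.Data;
using System.IO;

class DataSetSketch
{
    static void Main()
    {
        // Build an in-memory table; no database connection is involved.
        DataSet ds = new DataSet("Company");
        DataTable emp = ds.Tables.Add("Employee");
        emp.Columns.Add("Id", typeof(int));
        emp.Columns.Add("Name", typeof(string));

        emp.Rows.Add(1, "Alice");
        emp.Rows.Add(2, "Bob");

        // The hierarchical XML view of the relational data:
        StringWriter sw = new StringWriter();
        ds.WriteXml(sw);
        Console.WriteLine(sw.ToString());
    }
}
```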
The Data Provider Object
Data in the data source is retrieved using the various components of the data provider, which supply data to the application and update changes back to the data source.

Components of the data provider:
- Connection
- Command
- DataAdapter
- DataReader
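The four components fit together as in the following sketch. The connection string and the Customers table are placeholder assumptions, so this is not runnable without a reachable SQL Server; it only illustrates which component plays which role:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ProviderPipeline
{
    static void Main()
    {
        // Placeholder connection string for illustration only.
        string conStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

        using (SqlConnection con = new SqlConnection(conStr))            // Connection
        {
            con.Open();
            SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", con); // Command

            // Connected, forward-only reading:
            using (SqlDataReader reader = cmd.ExecuteReader())           // DataReader
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);
            }

            // Disconnected reading into an in-memory cache:
            SqlDataAdapter adapter =
                new SqlDataAdapter("SELECT * FROM Customers", con);      // DataAdapter
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");
        }
    }
}
```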
Properties of the Connection Object
Connection String: the primary property. The connection string provides information such as the data provider, data source, database name, integrated security, and so on. For instance:

Provider=SqlOledb1;Data Source=MySqlServer;Initial Catalog=Northwind;Integrated Security=SSPI

The Connection
We can initialize a database connection by constructing a connection object with a connection string and then opening it.
Dispose - Releases the resources held by the connection object, ensuring no resources are still held after our connection is used. Incidentally, by using the Dispose method we automatically call the Close method as well.
State - Tells you what state your connection object is in; often used to check whether our connection is still using any resources. Ex.: if (ConnectionObject.State == ConnectionState.Open).

Properties of the Command Object
CommandText property: holds the SQL statement or the name of the stored procedure that the command executes.
CommandType property: indicates whether CommandText is interpreted as SQL text or as a stored procedure name.

The Command object provides several execute methods, namely ExecuteScalar, ExecuteReader and ExecuteNonQuery. Let's have a brief look at their functions.
ExecuteScalar - Performs query commands that return a single value, for example counting the number of records in a table.
ExecuteNonQuery - Executes commands and returns the number of rows affected by the command.
ExecuteReader - Reads records sequentially from the database.
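A sketch showing the three execute methods side by side; the connection string, table and column values are assumed placeholders, so this needs a real database to run:

```csharp
using System;
using System.Data.SqlClient;

class ExecuteMethodsSketch
{
    static void Main()
    {
        // Placeholder connection string for illustration only.
        string conStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

        using (SqlConnection con = new SqlConnection(conStr))
        {
            con.Open();

            // ExecuteScalar: a single value (first column of the first row).
            SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Customers", con);
            int rows = (int)countCmd.ExecuteScalar();

            // ExecuteNonQuery: the number of rows affected.
            SqlCommand updateCmd = new SqlCommand(
                "UPDATE Customers SET City = 'Madrid' WHERE CustomerID = 'ALFKI'", con);
            int affected = updateCmd.ExecuteNonQuery();

            // ExecuteReader: sequential, forward-only records.
            SqlCommand selectCmd = new SqlCommand("SELECT CustomerID FROM Customers", con);
            using (SqlDataReader reader = selectCmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }

            Console.WriteLine("{0} rows total, {1} updated", rows, affected);
        }
    }
}
```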
The SqlDataReader Object
A data reader can be used to increase the performance of an application that works in read-only, forward-only mode. Many data operations require only a stream of data for reading. The data reader object allows us to obtain the results of a SELECT statement from a command object. For performance reasons, the data returned from a data reader is a fast, forward-only stream, which means we can only pull the data from the stream in a sequential manner. A DataReader cannot be created directly from code; it can be created only by calling the ExecuteReader method of a Command object.
The DataReader object provides connection-oriented data access to the data source. A Connection object can contain only one DataReader at a time, the connection used by the DataReader remains open, and it cannot be used for any other purpose while data is being accessed. When we start to read from a DataReader, it must be open and positioned prior to the first record. The Read() method of the DataReader is used to read the rows, and it always moves forward to the next valid row, if one exists. We usually use two types of DataReader in ADO.NET: SqlDataReader and OleDbDataReader, contained in System.Data.SqlClient and System.Data.OleDb respectively.
When the ExecuteReader method of the SqlCommand object executes, it instantiates a SqlClient.SqlDataReader object; the rows are then consumed by calling SqlDataReader.Read().
The SqlDataAdapter Object
Sometimes the data we work with is primarily read-only and we rarely need to make changes to the underlying data source. Some situations also call for caching data in memory to minimize the number of database calls for data that does not change. The data adapter makes this easy.
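As a sketch of that caching pattern (the connection string and table name are placeholders), a SqlDataAdapter can fill a DataSet once, let the application edit the cached copy offline, and push the changes back with Update; the SqlCommandBuilder used here is one standard way to supply the update commands:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class AdapterRoundTrip
{
    static void Main()
    {
        // Placeholder connection string for illustration only.
        string conStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

        SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conStr);

        // Derives INSERT/UPDATE/DELETE commands from the SELECT statement.
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Customers");   // connection opened and closed automatically

        // Edit the cached copy while disconnected...
        ds.Tables["Customers"].Rows[0]["City"] = "Madrid";

        // ...then push the changes back in one call.
        adapter.Update(ds, "Customers");
    }
}
```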
Example
This example demonstrates how to populate a GridView with data from the database using ADO.NET.

STEP 1: Open the Web.config file and set up the connection string like below:

<connectionStrings>
  <add name="MyDBConnection" connectionString="Data Source=WPHVD1850229O0;Initial Catalog=Northwind;Integrated Security=SSPI;" providerName="System.Data.SqlClient"/>
</connectionStrings>

STEP 2: Create the GetConnectionString() method - a method for accessing the connection string that was set up in the Web.config file:

private string GetConnectionString()
{
    return System.Configuration.ConfigurationManager.ConnectionStrings["MyDBConnection"].ConnectionString;
}
Note: MyDBConnection is the name of the connectionstring that was set up in the webconfig. STEP 3: Setting up the GridView in the mark up (ASPX) - Grab a GridView from the Visual Studio ToolBox and then Set AutoGenerateColumns to False. - Add BoundField Columns in GridView and set the DataField and the HeaderText accordingly. See below:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head runat="server"> <title>Populating GrindView Control</title> </head> <body> <form id="form1" runat="server"> <div> <asp:GridView <Columns> <asp:BoundField <asp:BoundField <asp:BoundField <asp:BoundField </Columns> </asp:GridView> </div> </form> </body> </html>
STEP 4: Create the BindGridView() method
- After setting up the GridView in the markup, switch back to the code-behind.
- Declare the following namespace so that we can use the SqlClient built-in libraries:

using System.Data.SqlClient;

- Create the method for binding the GridView:
private void BindGridView()
{
    DataTable dt = new DataTable();
    SqlConnection connection = new SqlConnection(GetConnectionString());
    try
    {
        connection.Open();
        string sqlStatement = "SELECT * FROM Customers";
        SqlCommand sqlCmd = new SqlCommand(sqlStatement, connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlDa.Fill(dt);
        if (dt.Rows.Count > 0)
        {
            GridView1.DataSource = dt;
            GridView1.DataBind();
        }
    }
    catch (System.Data.SqlClient.SqlException ex)
    {
        string msg = "Fetch Error:";
        msg += ex.Message;
        throw new Exception(msg);
    }
    finally
    {
        connection.Close();
    }
}
STEP 5: Calling the BindGridView() method on initial load of the page. protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { BindGridView(); } }
The ADO.NET Framework supports two models of data access architecture: connection-oriented data access and disconnected data access. In the connection-oriented model, the application stays connected to the data source while it works with the data. In the disconnected model, the data is retrieved once into a DataSet held in the application's memory:

SqlDataAdapter adapter = new SqlDataAdapter("sql", "connection");
DataSet ds = new DataSet();
adapter.Fill(ds, "Src Table");

By keeping connections open for only a minimum period of time, ADO.NET conserves system resources, provides maximum security for databases, and has less impact on system performance.
Storing Binary Images, Files into SQL Server using ADO.NET
Microsoft SQL Server provides a field data type named varbinary(MAX) to hold binary data. We'll be using ASP.NET to upload a file with the asp:FileUpload control, store the uploaded file in a binary buffer, and then insert this binary data into the varbinary(MAX) field in the simple ADO.NET way.

1. Create a database table named StoreImageFiles in SQL Server 2008 to hold the records for uploaded image files. Essentially, there are four fields named ImageFileID, ImageFileSize, ImageFileMIMEType and ImageFileBinaryData. SQL script:

CREATE TABLE [dbo].[StoreImageFiles] (
    [ImageFileID] [int] IDENTITY(1,1) NOT NULL,
    [ImageFileSize] [int] NOT NULL,
    [ImageFileMIMEType] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ImageFileBinaryData] [varbinary](max) NOT NULL,
    CONSTRAINT [PK_StoredBinaryData] PRIMARY KEY CLUSTERED ([ImageFileID] ASC) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
ImageFileID: the primary key for the table StoreImageFiles.
ImageFileSize: the actual size in bytes of the binary file stored in each record. We'll also need this column in order to retrieve the binary image file.
ImageFileMIMEType: the type of the file being stored. We'll need this column as well in order to retrieve the binary image file.
ImageFileBinaryData: the actual binary data for the file (image etc.).

2. Create a new ASP.NET website named 'StoreImagesInDB', say. Place an asp:FileUpload control and a button named 'UploadImageFile' and 'btnSaveImageFile' respectively on the web form. Below is the page markup:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="StoringBinaryDataInSqlServer.aspx.cs" Inherits="StoringBinaryDataInSqlServer" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head id="Head1" runat="server"> <title>Saving Image Files in SQL Server using ADO.Net</title> </head> <body> <form id="form1" runat="server"> <asp:FileUpload <asp:Button </form> </body> </html>
We'll simply upload the file to be stored using the 'UploadImageFile' FileUpload control, get the size of the file in bytes, get the file type, read the file into a binary buffer, and using standard ADO.NET save this binary stream from memory into the database table.

3. Enter the code in the code-behind file as:

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.SqlClient;
using System.IO;
using System.Data;
public partial class StoringBinaryDataInSqlServer : System.Web.UI.Page
{
    protected void btnSaveImageFile_Click(object sender, EventArgs e)
    {
        string imageToDbConStr = "Data Source=ADMIN-PC4;Initial Catalog=project1;User Id=sa;Password=sql@123;";
        using (SqlConnection imageToDbConn = new SqlConnection(imageToDbConStr))
        {
            int imageFileSize = UploadImageFile.PostedFile.ContentLength;
            string imageFileMIMEType = UploadImageFile.PostedFile.ContentType;
            BinaryReader imageFileBinaryReader = new BinaryReader(UploadImageFile.FileContent);
            byte[] imageFileBinaryBuffer = imageFileBinaryReader.ReadBytes(imageFileSize);
            imageFileBinaryReader.Close();

            string imageToDbQueryStr = @"INSERT INTO StoreImageFiles(ImageFileSize, ImageFileMIMEType, ImageFileBinaryData)
                VALUES(@ImageFileSize, @ImageFileMIMEType, @ImageFileBinaryData)";
            SqlCommand imageToDbCmd = new SqlCommand(imageToDbQueryStr, imageToDbConn);

            // Set up the command parameters for the three columns
            imageToDbCmd.Parameters.AddWithValue("@ImageFileSize", imageFileSize);
            imageToDbCmd.Parameters.AddWithValue("@ImageFileMIMEType", imageFileMIMEType);
            SqlParameter imageFileBinaryParam =
                new SqlParameter("@ImageFileBinaryData", SqlDbType.VarBinary, imageFileSize);
            imageFileBinaryParam.Value = imageFileBinaryBuffer;
            imageToDbCmd.Parameters.Add(imageFileBinaryParam);

            imageToDbConn.Open();
            imageToDbCmd.ExecuteNonQuery();
        }
    }
}
Compile and run the application. The output window will show the file upload control with a browse button. Browse to the path of the image to be stored and click the upload button; the image will then be stored in the database table. Check the stored record with the SQL statement select * from StoreImageFiles.
Retrieve Binary Images, Files from DB & Display in GridView
We can retrieve binary image file data stored in a SQL Server database table using a SqlDataReader (standard ADO.NET) and display each image record in an asp:GridView.
1. Create a new ASP.NET website named 'RetrievingImagesFromDB'.
2. Place an asp:GridView and an asp:SqlDataSource on the web form, and configure the data source.
3. Enter the following markup code:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="RetrievingImagesFromDB.aspx.cs" Inherits="RetrievingImagesFromDB" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "" > <html xmlns=""> <head id="Head1" runat="server"> <title>Retrieve Binary Image From SQL Server and Display in Gridview</title> </head> <body> <form id="form1" runat="server"> <asp:GridView <Columns> <asp:BoundField <asp:BoundField
<asp:ImageField</asp:ImageField> </Columns> </asp:GridView> <asp:SqlDataSource</asp:SqlDataSource> </form> </body> </html>
4. We're having four columns in the GridView because the table has four fields. The first three of them are regular asp:BoundFields and the fourth one is an asp:ImageField that'll be used to display the images in the GridView. In the image field we've set DataImageUrlFormatString to a URL. This URL can point to a regular .aspx web form; here we are using a generic handler (.ashx) to avoid complete page processing. For each row in the GridView we'll be passing the value of 'ImageFileID' to this generic handler in the query string, and in response it will return an image. In the Solution Explorer, right-click, choose Add New Item > Generic Handler, enter the filename DisplayImage.ashx, and enter the following code:

<%@ WebHandler Language="C#" Class="DisplayImage" %>

using System;
using System.Web;
using System.Data;
using System.Data.SqlClient;
using System.IO;
public class DisplayImage : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int imageFileID = Convert.ToInt32(context.Request.QueryString["ImageFileID"]);
        string imageFromDbConStr = "Data Source=ADMIN-PC4;Initial Catalog=project1;User Id=sa;Password=sql@123;";
        using (SqlConnection imageFromDbConn = new SqlConnection(imageFromDbConStr))
        {
            string imageFromDbQueryStr = @"SELECT ImageFileSize, ImageFileMIMEType, ImageFileBinaryData
                FROM StoreImageFiles WHERE ImageFileID = @ImageFileID";
            SqlCommand imageFromDbCmd = new SqlCommand(imageFromDbQueryStr, imageFromDbConn);
            imageFromDbCmd.Parameters.AddWithValue("@ImageFileID", imageFileID);
            imageFromDbConn.Open();
            SqlDataReader imageFromDbReader = imageFromDbCmd.ExecuteReader(CommandBehavior.SingleRow);
            if (imageFromDbReader.HasRows)
            {
                imageFromDbReader.Read();
                string imageFileMIMEType = imageFromDbReader["ImageFileMIMEType"].ToString();
                string imageFileSize = imageFromDbReader["ImageFileSize"].ToString();
                byte[] ImageFileBinaryData = (byte[])imageFromDbReader["ImageFileBinaryData"];

                // Send the stored MIME type and binary data back as the response
                context.Response.ContentType = imageFileMIMEType;
                context.Response.OutputStream.Write(ImageFileBinaryData, 0, ImageFileBinaryData.Length);
            }
            imageFromDbReader.Close();
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
Compile and run the application. The output will be a GridView with the columns ImageFileID, ImageFileSize, ImageFileMIMEType and Image, one row per stored file (for example, files of 2018 and 3049 bytes with MIME type image/pjpeg, alongside the rendered images).
Bulk copying of data from one data source to another is a new feature added to ADO.NET. A bulk copy class provides the fastest way to transfer a set of data from one source to another. Each ADO.NET data provider provides bulk copy classes. In the SQL Server .NET data provider, for example, the bulk copy operation is handled by the SqlBulkCopy class; the source data can come from one of four types - a DataReader, DataSet, DataTable, or XML. To perform a bulk copy operation, we first read the source data, for instance with a data reader:

cmd = new SqlCommand("SELECT * FROM Products", source);
// Execute reader
SqlDataReader reader = cmd.ExecuteReader();

Creating a SqlBulkCopy Object
In ADO.NET, we need to set the DestinationTableName property to the table we want to copy data to, and then write the source rows to the server:

// Create SqlBulkCopy
SqlBulkCopy bulkData = new SqlBulkCopy(destination);
// Set destination table name
bulkData.DestinationTableName = "Products";
// Copy the rows from the reader
bulkData.WriteToServer(reader);
Example
1. Create a database project1, a table emp under this database, and insert some records.
2. Create another database test and create a table emp1.
3. Create a website and enter the following code in the code file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.SqlClient;
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        /***** Bulk copy Order Details data from Northwind to Northwind2 ******/
        string northwindConnectionString = "Data Source=ADMIN-PC4;Initial Catalog=project1;User Id=sa;Password=sql@123;";
        string northwind2ConnectionString = "Data Source=ADMIN-PC4;Initial Catalog=test;User Id=sa;Password=sql@123;";
        using (SqlConnection northwindConnection = new SqlConnection(northwindConnectionString))
        {
            northwindConnection.Open();
            SqlCommand command = new SqlCommand("Select * from [emp]", northwindConnection);
            SqlDataReader dataReader = command.ExecuteReader();
            using (SqlConnection northwind2Connection = new SqlConnection(northwind2ConnectionString))
            {
                northwind2Connection.Open();
                SqlBulkCopy bulkCopy = new SqlBulkCopy(northwind2Connection);
                bulkCopy.DestinationTableName = "[emp1]";
                bulkCopy.WriteToServer(dataReader);
                northwind2Connection.Close();
            }
            northwindConnection.Close();
        }
    }
}
4. Compile and run the application. Output: Bulk Copy Operation Successful
5. Check the bulk copy of records using database test and the SQL statement select * from emp1. All records in emp (database project1) will be copied into table emp1 (database test).

Transactions
The scope of a transaction can include activities such as retrieving data from SQL Server, reading a message from a message queue, and writing to a database.
Database transaction: invoking a stored procedure that wraps the required operations with BEGIN TRANSACTION and COMMIT/ROLLBACK TRANSACTION lets you run the transaction in a single round trip to the database server.
There are two types of transaction that you will deal with: local implicit and distributed implicit. A local implicit transaction talks to only one database; we have already seen many examples of local implicit transactions. A distributed transaction talks to multiple databases at the same time.
Creating a transaction is easy: simply create a TransactionScope with a using statement and include the code that executes your SQL statements. When the work is done, tell the TransactionScope that it is ready to commit by calling its Complete method.
The TransactionScope class in the System.Transactions namespace enables developers to programmatically wrap a series of statements within the scope of a transaction, and includes support for complex transactions that involve multiple sources, such as two different databases, or even heterogeneous data stores such as a Microsoft SQL Server database, an Oracle database and a web service. TransactionScope uses the Microsoft Distributed Transaction Coordinator (MSDTC) for distributed transactions; its configuration and implementation make it a rather advanced topic beyond the scope of this tutorial.
Steps for TransactionScope:
1. Add a reference to the System.Transactions assembly.
2. Import the System.Transactions namespace in your application.
3. If you want to specify nesting behavior for the transaction, declare a TransactionScopeOption variable and assign it a suitable value.
4. If you want to specify the isolation level or timeout for the transaction, create a TransactionOptions instance and set its IsolationLevel and Timeout properties.
5. Create a new TransactionScope object. Pass the TransactionScopeOption variable and the TransactionOptions object into the constructor, if appropriate.
6. Open connections to each database that you need to update and perform the update operations as required by the application. If all updates succeed, call the Complete method on the TransactionScope object, then close the using block.
using (TransactionScope transScope = new TransactionScope())
{
    string conString1 = WebConfigurationManager.ConnectionStrings["DB1"].ConnectionString;
    string conString2 = WebConfigurationManager.ConnectionStrings["DB2"].ConnectionString;

    using (SqlConnection sqlCon1 = new SqlConnection(conString1))
    {
        SqlCommand sqlCmd1 = sqlCon1.CreateCommand();
        sqlCmd1.CommandText = "INSERT INTO asps(emp, name) Values (2,'Electronics')";
        sqlCon1.Open();
        sqlCmd1.ExecuteNonQuery();
        sqlCon1.Close();
    }

    using (SqlConnection sqlCon2 = new SqlConnection(conString2))
    {
        SqlCommand sqlCmd2 = sqlCon2.CreateCommand();
        sqlCmd2.CommandText = "INSERT INTO ds(names) Values ('java')";
        sqlCon2.Open();
        sqlCmd2.ExecuteNonQuery();
        sqlCon2.Close();
    }

    transScope.Complete();
}
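The optional TransactionOptions and TransactionScopeOption mentioned in the steps can be combined as in this sketch (the connection string and SQL text are placeholders):

```csharp
using System;
using System.Transactions;
using System.Data.SqlClient;

class TransactionOptionsSketch
{
    static void Main()
    {
        // Configure isolation level and timeout for the scope.
        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = IsolationLevel.ReadCommitted;
        options.Timeout = TimeSpan.FromSeconds(30);

        // Required: join an ambient transaction if one exists, otherwise create one.
        using (TransactionScope scope =
                   new TransactionScope(TransactionScopeOption.Required, options))
        {
            using (SqlConnection con = new SqlConnection("connection string placeholder"))
            {
                con.Open();
                new SqlCommand("INSERT INTO t(c) VALUES (1)", con).ExecuteNonQuery();
            }
            scope.Complete();   // without this, the transaction rolls back on Dispose
        }
    }
}
```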
Caching
ASP.NET supports three types of caching:
- Page output caching [output caching]
- Fragment caching [output caching]
- Data caching
Page Output Caching
Before starting with page output caching, we need to understand how a page is generated, because that explains why caching helps. An ASPX page is compiled and its output is regenerated on every request, so for pages whose content is relatively static this work is wasted. Rather than generating such a page on each user request, we can cache it using page output caching so that subsequent requests are served from the cache itself. Page output caching allows the entire content of a given page to be stored in the cache: the page is generated once, on the first request, and the same page is then retrieved from the cache rather than regenerated. For output caching, an OutputCache directive can be added to any ASP.NET page, specifying the duration (in seconds) that the page should be cached. The cache policy can also be set programmatically; the code file is:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Cache.SetExpires(DateTime.Now.AddSeconds(360));
    }
}
Source file:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <script runat="server">
            protected void Page_Load(Object sender, EventArgs e)
            {
                lbl_msg.Text = DateTime.Now.ToString();
            }
        </script>
        <h3>Output Cache example</h3>
        <p>Page generated on: <asp:Label ID="lbl_msg" runat="server" /></p>
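The declarative alternative mentioned earlier is the OutputCache directive placed at the top of the .aspx page; the duration here (60 seconds) is an arbitrary example value:

```aspx
<%@ OutputCache Duration="60" VaryByParam="None" %>
```

With this directive, the same rendered output is served from the cache for 60 seconds before the page is regenerated.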
There will be situations where we would like to cache only sections of a page rather than the whole page. There may be sections where the information gets updated every second or so, and those sections should not be cached. This can be implemented in ASP.NET using fragment caching: the technique by which we cache a section of the page instead of caching the whole page.
Source (Default.aspx):

<%@ Page %>
<br /><br /><br />
This page was most recently generated at:
<p>
<% DateTime t = DateTime.Now; Response.Write(t); %>
Output: This user control was most recently generated at: 3/7/2011 5:25:27 PM

Data Caching
Create an XML file as mydata.xml:

<?xml version="1.0" encoding="utf-8" ?>
<bookstore>
  <genre name="Fiction">
    <book ISBN="10-861003-324" Title="title 1" Price="19.99">
      <chapter num="1" name="Introduction"> A </chapter>
      <chapter num="2" name="Body"> B </chapter>
      <chapter num="3" name="Conclusion"> C </chapter>
    </book>
    <book ISBN="1-861001-57-5" Title="title " Price="24.95">
      <chapter num="1" name="Introduction"> D </chapter>
      <chapter num="2" name="Body"> E </chapter>
      <chapter num="3" name="Conclusion"> F </chapter>
    </book>
  </genre>
  <genre name="NonFiction">
    <book ISBN="10-861003-324" Title="title 2" Price="19.99">
      <chapter num="1" name="Introduction"> G </chapter>
      <chapter num="2" name="Body"> H </chapter>
      <chapter num="3" name="Conclusion"> I </chapter>
    </book>
    <book ISBN="1-861001-57-6" Title="title 3" Price="27.95">
      <chapter num="1" name="Introduction"> J </chapter>
      <chapter num="2" name="Body"> K </chapter>
      <chapter num="3" name="Conclusion"> L </chapter>
    </book>
  </genre>
</bookstore>
Source code:

<%@ Page Language="C#" %>
<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        lblCurrentTime.Text = "Current Time is : " + DateTime.Now.ToLongTimeString();
    }
</script>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <asp:Label</asp:Label> <asp:GridView <Columns> <asp:BoundField</asp:BoundField> <asp:BoundField</asp:BoundField> <asp:BoundField</asp:BoundField> </Columns>
</asp:GridView> <asp:XmlDataSource </asp:XmlDataSource> </div> </form> </body> </html>
Python - Extracting and Saving Video Frames
So I've followed this tutorial but this doesn't seem to do anything. Simply nothing. It waits a few seconds and closes the program. What is wrong with this code?
import cv2
vidcap = cv2.VideoCapture('Compton.mp4')
success,image = vidcap.read()
count = 0
success = True
while success:
  success,image = vidcap.read()
  cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
  if cv2.waitKey(10) == 27:                  # exit if Escape is hit
      break
  count += 1
Also, in the comments it says that this limits the frames to 1000? Why?
EDIT: I tried doing
success = True first but that didn't help. It only created one image that was 0 bytes.
So here was the final code that worked:
import cv2
print(cv2.__version__)
vidcap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
success,image = vidcap.read()
count = 0
while success:
  cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
  success,image = vidcap.read()
  print('Read a new frame: ', success)
  count += 1
So for this to work, you'll have to get some stuff. First, download OpenCV2. Then install this for Python 2.7.x. Go to the ffmpeg folder inside the 3rd party folder (something like
C:\OpenCV\3rdparty\ffmpeg, but i'm not sure). Copy
opencv_ffmpeg.dll (or the x64 version if your python version is x64) and paste it into your Python folder (probably
C:\Python27). Rename it
opencv_ffmpeg300.dll if your opencv version is 3.0.0 (you can find it here), and change accordingly to your version. Btw, you must have your python folder in your environment path.
From: stackoverflow.com/q/33311153
This chapter describes the steps required to develop a basic JMS application in C# using the JMS .NET API. The process for developing a JMS application using the WebLogic JMS .NET client is very similar to the process used to develop a Java client.
Creating a JMS .NET Client Application
Example: Writing a Basic PTP JMS .NET Client Application
Using Advanced Concepts in JMS .NET Client Applications
The following flowchart illustrates the steps in a basic JMS .NET application.
Figure 3-1 Basic Steps in a JMS .NET Client Application
Note: Creating and closing resources has relatively higher overhead in comparison to sending and receiving messages. Oracle recommends that contexts be shared between threads, and that other resources be cached for reuse. For more information, see Best Practices.
The following example shows how to create a basic PTP JMS .NET client application, written in C#. It uses synchronous receive on a queue configured using auto acknowledge mode. A complete copy of the example is provided in Appendix A, "JMS .NET Client Sample Application."
For more information about the .NET API classes and methods used in this example, see Understanding the WebLogic JMS .NET API, or the WebLogic Messaging API Reference for .NET Clients documentation.
Before proceeding, ensure that the system administrator responsible for configuring WebLogic Server has configured the following:
Listen port configured for T3 protocol on the server hosting the JMS .NET client host. For more information, see Configuring the Listen Port.
The required JMS resources, including the connection factories, JMS servers, and destinations. For more information, see Configuring JMS Resources for the JMS .NET Client.
The following steps assume you have defined the required variables, including the WebLogic Server host, the connection factory, and the queue and topic names at the beginning of your program.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;
using WebLogic.Messaging;

public class MessagingSample
{
    private string host = "localhost";
    private int port = 7001;
    private string cfName = "weblogic.jms.ConnectionFactory";
    private string queueName = "jms.queue.TestQueue1";
Create a context to establish a network connection to the WebLogic Server host and optionally login.
IDictionary<string, Object> paramMap = new Dictionary<string, Object>();
paramMap[Constants.Context.PROVIDER_URL] = "t3://" + this.host + ":" + this.port;
IContext context = ContextFactory.CreateContext(paramMap);
Note: The PROVIDER_URL may contain multiple addresses, separated by commas. For details about specifying multiple addresses, see Specifying the URL Format.
When multiple addresses are specified, the context tries each address in turn until one succeeds or they all fail, starting at a random location within the list of addresses, and rotating through all addresses. Starting at a random location facilitates load balancing of multiple clients, as different client contexts will randomly load balance their network connection to different .NET client host servers.
Note: You also have the option of supplying a username and password with the initial context, as follows:

paramMap[Constants.Context.SECURITY_PRINCIPAL] = username;
paramMap[Constants.Context.SECURITY_CREDENTIALS] = password;
Look up the JMS connection factory.
IConnectionFactory cf = context.LookupConnectionFactory(this.cfName);
Look up JMS destination resources in the context using their configured JNDI names.
IQueue queue = (IQueue)context.LookupDestination(this.queueName);
Create a connection using the connection factory. This establishes a JMS connection from the .NET client host to the JMS connection host. The connection host will be one of the servers that is in the configured target list for the connection factory, and which can be the same as the .NET client host.
IConnection connection = cf.CreateConnection();
Start the connection to allow consumers to get messages.
connection.Start();
Create a session using the AUTO_ACKNOWLEDGE acknowledge mode.
Note: Sessions are not thread safe. Use multiple sessions if you need to run producers and/or consumers concurrently. For an example using multiple sessions, see the asynchronous example in Appendix A, "JMS .NET Client Sample Application."
ISession session = connection.CreateSession(Constants.SessionMode.AUTO_ACKNOWLEDGE);
Create a message producer and send a persistent message.
IMessageProducer producer = session.CreateProducer(queue);
producer.DeliveryMode = Constants.DeliveryMode.PERSISTENT;
ITextMessage sendMessage = session.CreateTextMessage("My q message");
producer.Send(sendMessage);
Create a message consumer and receive a message. Note that the message is automatically deleted from the server because the session was created in AUTO_ACKNOWLEDGE mode, as shown in Step 6.
IMessageConsumer consumer = session.CreateConsumer(queue);
IMessage recvMessage = consumer.Receive(500);
Close the connection. Note that closing a connection also closes its child sessions, consumers, and producers.
connection.Close();
Appendix A, "JMS .NET Client Sample Application," provides a complete example of a JMS .NET client application, written in C#, that demonstrates some of the following advanced concepts:
The use of local transactions instead of acknowledge modes.
Message persistence. For more information, see Persistent vs. Non-Persistent Messages in Programming JMS for Oracle WebLogic Server.
Acknowledge modes. For more information, see Non-Transacted Session in Programming JMS for Oracle WebLogic Server.
Exception listeners. For more information, see Best Practices.
Durable Subscriptions. For more information, see Setting Up Durable Subscriptions in Programming JMS for Oracle WebLogic Server.
For guidelines in the use of other advanced concepts in the JMS .NET client such as interoperability, security, and best practices, see Chapter 4, "Programming Considerations."
https://docs.oracle.com/middleware/11119/wls/JMSDN/develop.htm
Owner: troutbird | 0.1 | Mar 31, 2016 | Package | Issues | Source | License: Apache-2.0
dependencies { compile 'org.grails.plugins:multitenant:0.1' }
grails-multitenant-plugin
This plugin is sponsored by IP Geolocation. It adds multitenant support for Grails 3 applications based on Hibernate filters. Tenants are resolved using Spring Security.
Note
Only the branch grails3.0.xhibernate4 is working. The rest of the branches are for GORM 5, which has broken the way Hibernate filters are applied, so they won't work. Also, this plugin works only for Grails 3.0 and 3.1. For later versions of Grails, use GORM's built-in multi-tenancy support. Work on this plugin has been stopped in favor of Grails' internal support for multi-tenant architecture.
Installation
Add the following dependency in build.gradle:
compile 'org.grails.plugins:multitenant:0.1'
Add a configClass attribute in application.yml under the dataSource section, like this:
configClass: org.grails.plugin.multitenant.HibernateMultitenantConfiguration
Architecture
This plugin uses a single-database, single-schema technique with a discriminator column to identify tenants.
Resolving Tenant
Currently it resolves the tenant using Spring Security, so you have to edit the Spring Security user domain class to implement the TenantIdentifier trait, like this:
class User implements Serializable, TenantIdentifier
It adds a userTenantId property to the User domain class and dynamically injects two closures into this domain:
withTenantId
This closure executes the code inside its scope with the tenantId supplied as a parameter, even if the logged-in user does not belong to that tenant. You should only execute idempotent code inside this block. If your code is a query, or some other read-only operation, it will execute it with the supplied tenantId. If your code is going to change something in the database, it will use the tenantId of the logged-in user. Be careful.
User.withTenantId(12){ // Your code goes here }
withoutTenantId
As the name states, you can bypass the tenantId filter temporarily to perform operations not specific to any tenant.
User.withoutTenantId(){ // Your code goes here }
The code in this scope should be read-only, as is the case with the withTenantId method above.
Multitenant Domain Classes
You have to implement Multitenant trait in all domain classes you want to be multitenant.
class Book implements Multitenant
This will add a tenantId property to the domain class and the three methods below:
Long tenantId

def beforeInsert() {
    if (tenantId == null) {
        tenantId = tenantResolverService.resolveTenant()
    }
}

def beforeValidate() {
    if (tenantId == null) {
        tenantId = tenantResolverService.resolveTenant()
    }
}

def beforeUpdate() {
    if (tenantId == null) {
        tenantId = tenantResolverService.resolveTenant()
    }
}
So if you want to use these methods in any multitenant domain class, you have to reproduce the above code along with your own implementation, as yours will overwrite these methods.
TenantResolverService
This plugin provides a TenantResolverService to your application, which can be injected anywhere just like normal Grails services. It provides only one method, resolveTenant, which returns the tenantId of the current user. The multitenant plugin makes extensive use of this service at various places inside the code. The multitenant filter intercepts controller actions only. If you want to do some multitenant work inside a service, you should call that service from a controller or use withTenantId as below:
def tenantResolverService

User.withTenantId(tenantResolverService.resolveTenant()) {
    // your code here
}
About ipgeolocation.io
IP Geolocation's IP intelligence APIs help developers find out Geolocation, Time Zone, Local Currency and much more from just an IP address. For more information, check out the documentation page.
https://plugins.grails.org/plugin/troutbird/multitenant
f descriptor.
RETURN VALUE
Upon successful completion 0 is returned. Otherwise, EOF is returned and errno is set to indicate the error. fclose() may also fail and set errno for any of the errors specified for the routines close(), write() or fflush().
CONFORMING TO
C89, C99.
NOTES
Note that fclose() only flushes the user space buffers provided by the C library. To ensure that the data is physically stored on disk the kernel buffers must be flushed too, for example, with sync() or fsync().
From Linux Programmer’s Manual
EXAMPLE
#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *fp;
    fp = fopen("file_to_write.txt", "wt");
    if (fp == NULL)
        return 1;
    fprintf(fp, "fclose example");
    fclose(fp);
    return 0;
}
https://www.systutorials.com/2775/fclose-close-a-stream/
in reply to Re^2: Why no comments?
in thread Why no comments?

I looked at this source to discover why Perl decimal-to-binary conversion is not correctly rounded. To cut a long story short, Perl_my_atof2() is a home-brew conversion routine, which makes a mess of some cases.
I wondered why the code doesn't use atof(), and therefore depend on the library writer's skill. There is no commentary to help. I understand, from responses to the bug report I submitted, that the code used to use atof() until its semantics changed in C99 ! Quite a lot of work went into Perl_my_atof2() -- I would have commented on why, at least.
The solution to the problem is to discard all the arithmetic in Perl_my_atof2() and replace it by something which checks that the incoming decimal number string is a valid Perl number, and feed the (possibly modified) string to the real atof(). I would happily bang out a patch to do that... except that it's struggle to work out what the code is expected to do, because there isn't anything at all to say why it is the way it is.
The issue is complicated by the problem of Locale. This is the code that uses Perl_my_atof2() -- I promise you, I have not removed any comments !
NV
Perl_my_atof(pTHX_ const char* s)
{
    NV x = 0.0;
#ifdef USE_LOCALE_NUMERIC
    dVAR;
    if (PL_numeric_local && IN_LOCALE) {
        NV y;

        /* Scan the number twice; once using locale and once without;
         * choose the larger result (in absolute value). */
        Perl_atof2(s, x);
        SET_NUMERIC_STANDARD();
        Perl_atof2(s, y);
        SET_NUMERIC_LOCAL();
        if ((y < 0.0 && y < x) || (y > 0.0 && y > x))
            return y;
    }
    else
        Perl_atof2(s, x);
#else
    Perl_atof2(s, x);
#endif
    return x;
}

For me there are two issues here:
the key question is, why does it do Perl_my_atof2() twice ?
Perl_my_atof2() is not a cheap operation. Who knows what it costs to switch the numeric locale. So it looks as though there must be some important semantic requirement here. I would have thought that merited some comment ?
Suppose Perl_my_atof is given the string '123,456.789' -- if the locale has a ',' decimal point, then the answer is 123.456, otherwise it's 123 (duh). I'm missing something, I expect, but I cannot see why two passes are required or what problem it is supposed to solve. The author must have had to think this through... it would not have cost much to leave some notes for posterity ?
Perl_my_atof2() itself worries about locale, and will (as far as I can see) accept either the current locale decimal point or '.'. I have a feeling that this means that the two passes in Perl_my_atof() are unnecessary -- if only I could be sure what they were for. If so:
I observe that there is no commentary to describe the acceptable form of numbers, so perhaps whoever wrote the two passes didn't fully understand what Perl_my_atof2() does ?
or, perhaps more likely, the two passes in Perl_my_atof() were required before Perl_my_atof2() was introduced, but whoever wrote Perl_my_atof2() couldn't see why either, and hence couldn't see that its locale handling is either redundant or makes the two passes redundant.
A relatively small amount of "why" commentary would help a lot now. Who knows, a little bit of more general commentary might have helped in the past.
sadly, the comment there is, is useless -- it tells you no more than what the code tells you.
Obviously I could settle down and run some test cases, and reverse engineer the requirements. As it happens, I'm not that familiar with the effects of locale switching, which adds to the problem. Just a few comments would do to equip me with the information required to fix the bug...
...further, if I'm right about the two passes in Perl_my_atof being redundant -- a few comments might have helped others too !
https://www.perlmonks.org/?displaytype=print;replies=1;node_id=740582
Manages a TransformFeedback buffer. More...
#include <vtkTransformFeedback.h>
Manages a TransformFeedback buffer.
OpenGL's TransformFeedback allows varying attributes from a vertex/geometry shader to be captured into a buffer for later processing. This is used in VTK to capture vertex information during GL2PS export when using the OpenGL2 backend as a replacement for the deprecated OpenGL feedback buffer.
Definition at line 40 of file vtkTransformFeedback.h.
Definition at line 44 of file vtkTransformFeedback.h.
The role a captured varying fills.
Useful for parsing later.
Definition at line 50 of file vtkTransformFeedback.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkObjectBase.
Clear the list of varying attributes to capture.
Capture the varying 'var' with the indicated role.
Get the list of captured varyings.
Definition at line 79 of file vtkTransformFeedback.h.
Returns the number of data elements each vertex requires for a given role.
Definition at line 217 of file vtkTransformFeedback.h.
Returns the number of bytes per vertex, accounting for all roles.
The number of vertices expected to be captured.
If the drawMode setter is used, PrimitiveMode will also be set appropriately. For the single-argument set function, set the exact number of vertices expected to be emitted, accounting for primitive expansion (e.g. triangle strips -> triangles). The two-argument setter is for convenience. Given the number of vertices used as input to a draw command and the draw mode, it will calculate the total number of vertices.
The size (in bytes) of the capture buffer.
Available after adding all Varyings and setting NumberOfVertices.
GL_SEPARATE_ATTRIBS is not supported yet.
The bufferMode argument to glTransformFeedbackVaryings. Must be GL_INTERLEAVED_ATTRIBS or GL_SEPARATE_ATTRIBS. Default is interleaved. Must be set prior to calling BindVaryings.

Call glTransformFeedbackVaryings(). Must be called after the shaders are attached to prog, but before the program is linked.
Get the transform buffer object.
Only valid after calling BindBuffer.
Get the transform buffer object handle.
Only valid after calling BindBuffer.
The type of primitive to capture.
Must be one of GL_POINTS, GL_LINES, or GL_TRIANGLES. Default is GL_POINTS. Must be set prior to calling BindBuffer.
Generates and allocates the transform feedback buffers.
Must be called before BindBuffer. This releases old buffers. nbBuffers is the number of buffers to allocate. size is the size in bytes to allocate per buffer. hint is the type of buffer (for example, GL_DYNAMIC_COPY).
Binds the feedback buffer, then call glBeginTransformFeedback with the specified PrimitiveMode.
Must be called after BindVaryings and before any relevant glDraw commands. If allocateOneBuffer is true, allocates one buffer (kept for backward compatibility).
Calls glEndTransformFeedback(), flushes the OpenGL command stream, and reads the transform feedback buffer into BufferData.
Must be called after any relevant glDraw commands. If index is positive, data of specified buffer is copied to BufferData.
Get the transform buffer data as a void pointer.
Only valid after calling ReadBuffer.
Release any graphics resources used by this object.
Release the memory used by the buffer data.
If freeBuffer == true (default), the data is deleted. If false, the caller is responsible for deleting the BufferData with delete[].
https://vtk.org/doc/nightly/html/classvtkTransformFeedback.html
I have set up my light box to load on the first page the viewer sees. The registration button, email and password boxes that I have set up are registering new members, but the ".then" function is not redirecting. I had to put a direct link in the button settings to get it to go to the page I wanted.
What I am trying to accomplish is, I would like to have the light box only pop up once per day, per visitor (non-members). If they are members, I would prefer that the light box not pop up every day, as long as the member's account stays logged in.
I also tried to have the ".then" go to the users profile after registering or logging in.
It goes to a "404" page wixLocation.to(' /profile ');
This is the code I have set now:
import wixUsers from 'wix-users';
import wixLocation from 'wix-location';

$w.onReady(function () {
  $w('#registerNow').onClick(function () {
    let email = $w('#registerEmail').value;
    let password = $w('#registerPassword').value;
    wixUsers.register(email, password)
      .then(() => {
        wixLocation.to('');
      });
  });
});

export function registerNow_click(event) {
  wixLocation.to('');
}
Any support would be greatly appreciated, wix support contact referred me here.
My name is Chris and I am the owner of
Hi,
First, if you wish the lightbox to replace the built-in Wix log-in system, you need to edit your custom signup.
As a result, if a visitor tries to get to a members-only page, the lightbox will automatically pop up.
*Pay attention to cancel the automatic display.
View this ticket in order to learn how to edit your custom signup system.
Second, View this code, I believe it will solve the problem:
Best of luck!
Sapir
You might want to add wix-window to the above to get it to close the lightbox after the user registers, although that is probably already covered on the page that Sapir has linked for you.
Below is the code for my own custom register/login lightbox. It works perfectly and closes after registering details before moving them onto a sign-up status page; then both names will be saved in Contacts, and once the site member is approved the member details will be added to my 'members' database.
Change the URL ending to whatever page you want to send the user to after they log in, as in your case it would be: wixLocation.to("/account/my-account");
https://www.wix.com/corvid/forum/community-discussion/light-box-for-registration
How can we get the order details (ID, number, totals, etc.) and promo code of the current order on the Thank You page? I don't know any direct way to get both of these in the thank you page code.
Currently I'm getting the order details by adding them to a new database (named OSI) using onNewOrder() in events.js (under the Backend menu), as follows:
import wixData from 'wix-data';

export function wixStores_onNewOrder(event) {
  let newOrderId = event.orderId;
  let newOrderNumber = event.number;
  let totalPrice = event.totals.subtotal - event.totals.discount;
  let toInsert = {
    "title": newOrderId,
    "order_total": totalPrice,
    "order_number": newOrderNumber
  };
  wixData.insert("OSI", toInsert)
    .then((results) => { })
    .catch((err) => { });
}
OSI database structure (imported from CSV file) is as follow
"Title","ID","Owner","Created Date","Updated Date","order_total","order_number"
"xyz","b763d84d-e1c0-4839-a952-88543e3c5669","affb50c0-5a78-4c4d-9e88-1e4d4e4dd37e",2019-05-07T19:20:23Z,2019-05-21T15:25:05Z,789,123
"abc","8202ec1d-359f-47e0-88ce-2e77b19e2c87","affb50c0-5a78-4c4d-9e88-1e4d4e4dd37e",2019-05-07T19:20:16Z,2019-05-21T15:25:13Z,123,789
And I read these details on the thank you page as follows (I have an iframe control named osi on the page):
import wixData from 'wix-data';
import wixLocation from 'wix-location';

$w.onReady(function () {
  let path = wixLocation.path;
  wixData.query("OSI")
    .find()
    .then((results) => {
      if (results.items.length > 0) {
        for (var v = 0; v < results.items.length; v++) {
          var item = results.items[v];
          if (item.title === path[1]) {
            $w('#osi').src = ":" + item.order_total + "/transaction:" + item.order_number;
            wixData.remove("OSI", item._id)
              .then((results_del) => { })
              .catch((err_del) => { });
            break;
          }
        }
      }
    })
    .catch((err) => { });
});
We can get the promo code details in onCartCompleted() in events.js (under the Backend menu). I have stored this info in my OSI database. The plan was to store the promo code info in onCartCompleted() and then store the order info in onNewOrder(). event.buyerInfo.id (which is available in both events) was used as the ID to store both pieces of info in the same OSI database record. The issue I'm facing is that both events seem to execute at the same time, so it adds a record with partial info. Isn't it supposed to execute in the following sequence: first onCartCompleted(), then onNewOrder()?
I know we can improve above mentioned code but I'm not interested in it for now. Please suggest me any workable solution for problem mentioned above. How we can get Order Details & Promo Code on Thank You Page?
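(Not Corvid-specific, but one workable pattern for the event-ordering problem above is to upsert a single record keyed by event.buyerInfo.id from both events, so whichever event fires first creates the record and the other fills in the rest. A plain JavaScript sketch follows, with a Map standing in for the OSI collection; the real code would use the wixData get/insert/update calls instead.)

```javascript
// A Map keyed by buyerInfo.id stands in for the OSI database here.
const db = new Map();

// Merge the given fields into whatever record already exists for this key.
function upsert(id, fields) {
  const existing = db.get(id) || { _id: id };
  db.set(id, Object.assign(existing, fields));
}

// The two backend events can arrive in either order:
upsert("buyer-1", { couponCode: "SAVE10" });              // from onCartCompleted()
upsert("buyer-1", { orderNumber: 123, orderTotal: 789 }); // from onNewOrder()

// Either way the record ends up complete:
console.log(db.get("buyer-1"));
// { _id: 'buyer-1', couponCode: 'SAVE10', orderNumber: 123, orderTotal: 789 }
```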
@givemeawhisky I see there are order details but I don't see any details for promo code! Can you please point out this info? Or any other link?
@aamir.shahzad.ropstam
If you want to show coupons etc, then look at using Wix Stores Backend instead.
@givemeawhisky It seems that there's no direct way to get current order's coupon code on thank you page. So I'm going to store coupon codes in the DB as I have mentioned in my 1st comment and read order details on thank you page as you have mentioned in the link (few comments above). I'm going to compare DB record data with order data on thank you page with event.buyerInfo.id and get coupon codes data. Let me know if you think there's any other better way.
Any suggestion/solution/link?
https://www.wix.com/corvid/forum/community-discussion/order-details-promo-code-on-thank-you-page
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002.
If you wish to participate in discussions, please visit the new
Artima Forums.
problem in drawing line
Posted by Theesan on July 07, 2001 at 10:42 AM
I have a problem in drawing graphics. Here I give a program to draw some lines, but the program only draws the last line. In C or other languages we can call a function directly to draw lines from any loop, but it is not possible in Java. I know that by storing all lines in an array we can draw all lines, but I want to know how to draw the lines directly from the loop without an array. If you know the answer please let me know. Thanx.

Here is the program.

import java.awt.*;

public class Test extends Canvas {
    Frame F = new Frame();
    int x1, x2, y1, y2;

    public Test() {
        F.setSize(300, 300);
        F.setVisible(true);
        setSize(200, 200);
        setVisible(true);
        F.add(this);
    }

    public void aa() {
        x1 = 0;
        x2 = 100;
        for (int i = 0; i < 100; i = i + 100) {
            y1 = i;
            y2 = i;
            repaint();
        }
    }

    public void paint(Graphics g) {
        g.drawLine(x1, y1, x2, y2);
    }

    public void update(Graphics g) {
        paint(g);
    }

    public static void main(String[] args) {
        Test t = new Test();
        t.aa();
    }
}
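For context, the array approach the poster mentions can be sketched without the AWT plumbing (the class and method names here are made up): record every line's coordinates while the loop runs, and have paint() draw the whole collection, since each repaint() redraws from scratch and only the last values of the shared x1/y1/x2/y2 fields survive otherwise.

```java
import java.awt.Graphics;
import java.util.ArrayList;
import java.util.List;

// Sketch: remember every line instead of overwriting shared coordinate fields.
class LineStore {
    static class Line {
        final int x1, y1, x2, y2;
        Line(int x1, int y1, int x2, int y2) {
            this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
        }
    }

    final List<Line> lines = new ArrayList<>();

    // The loop records each line's coordinates...
    void aa() {
        for (int i = 0; i < 100; i = i + 10) {
            lines.add(new Line(0, i, 100, i));
        }
    }

    // ...and paint() replays all of them on every repaint.
    void paint(Graphics g) {
        for (Line l : lines) {
            g.drawLine(l.x1, l.y1, l.x2, l.y2);
        }
    }
}
```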
http://www.artima.com/legacy/answers/Jul2001/messages/51.html
In this blog post, I’ll go through the steps of implementing the Share feature in your Windows Store App. Sharing data can be bidirectional, your App can either accept data from another App and/or your App can share data to another App. In this article, I’ll discuss the latter scenario in which you share data with another App and your App acts as a source of information for sharing. I’ll be using the example of TFS Dashboard App in which I connect to OData Service for Team Foundation Server to retrieve data from TFServer. The source code of this App is available for download.
Consider a scenario where you want to share WorkItems from a Windows Store App with a team member. In this App, you can select the WorkItems that you wish to share.
And bring up the Charms bar (Windows key + C or swipe from the right) and hit the Share button.
Then from a list of Apps that accept the data format that you are sharing will be available as share targets.
For this example, let’s select the Mail App, and send the WorkItems to my colleague:
Now let’s implement share in our Windows Store App.
Step 1: Add the "OnDataRequested" event handler towards the end of the LoadState() method of ItemDetailPage.xaml.cs, or of the page you want to share the data from.
DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
The DataRequested event is part of the Windows.ApplicationModel.DataTransfer namespace, hence add the following using statement to the page.
using Windows.ApplicationModel.DataTransfer;
and for StringBuilder class add the namespace:
using System.Text;
Step 2: When we bring up the Charms bar and hit the Share button, the OnDataRequested event is called for that App.
We need to handle the OnDataRequested event. Add this event handler to your page, in my case it is ItemDetailPage.xaml.cs.
C#
void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
    var request = args.Request;
    StringBuilder html = new StringBuilder();

    if (this.itemGridView.SelectedItems.Count != 0)
    {
        // if the selected items count is greater than 1 then create an HTML ordered (numbered) list
        if (this.itemGridView.SelectedItems.Count > 1)
        {
            var items = this.itemGridView.SelectedItems;
            html.Append("<ol>");
            foreach (WorkItem item in items)
            {
                html.Append("<li>/></li>");
            }
            html.Append("</ol>");
        }
        else
        {
            var item = (WorkItem)this.itemGridView.SelectedItem;
            html.Append("/>");
        }

        string data = Windows.ApplicationModel.DataTransfer.HtmlFormatHelper.CreateHtmlFormat(html.ToString());
        request.Data.SetHtmlFormat(data);
    }

    request.Data.Properties.Title = "Workitem/s from project " + SampleDataSource.ProjectName;
    request.Data.Properties.Description = "Checkout the workitem/s!!";
}
In the above method, I go through the selected items, i.e. the WorkItems on the ItemDetailPage.xaml.cs page. I select certain properties of the selected WorkItems and create HTML-formatted text for sharing the data. CreateHtmlFormat() is a method that checks your formatting and adds the necessary headers.
Step 3: Detach the event handler in the SaveState method
// Deregister the DataRequested event handler
DataTransferManager.GetForCurrentView().DataRequested -= OnDataRequested;
That’s it, we are done and ready to share data from our App.
For another sample example of implementing share by using Windows Store App as source of data visit:.
For Share API reference, visit:.
Very nice example! Do you know if it is possible to append css styles to the html?
https://blogs.msdn.microsoft.com/nishasingh/2013/01/16/sharing-data-from-a-windows-store-app-using-winrt-api/
This is the mail archive of the cygwin mailing list for the Cygwin project.
On Oct 12 11:09, patrick ficheux wrote:
> In SANE (scanner project), the backend for snapscan failed to call shmget()
> with error EACCES (Permission denied) if the current user isn't
> administrator.
> When I'am logged as windows administrator, shmget() is called successfully
>
> In both case, the env. variable CYGWIN exists and this value is
> CYGWIN=server
> cygserver is installed and runs
>
> Is it possible to call shmget() without administrator's privileges ?
> In this case, what kind of privileges a user must have ? and how to set
> those privileges ?
>
> Thanks
>
> extract from snapscan backend
>
> #ifndef SHM_R
> #define SHM_R 0
> #endif
>
> #ifndef SHM_W
> #define SHM_W 0
> #endif
  ^^^^^^^^^^^^^^^^^^^^
This is the problem. SHM_R and SHM_W are not defined on Cygwin. These
flags are not defined by POSIX and relying on them as above is
non-portable. As a result, you create a shared mem region with
permission bits set to 000.

> int shm_id = shmget (IPC_PRIVATE, shm_size, IPC_CREAT | SHM_R | SHM_W);

Try something like

  #include <sys/stat.h>

  int shm_id = shmget (IPC_PRIVATE, shm_size, IPC_CREAT | S_IRWXU);

instead.

Corinna

--
Corinna Vinschen                  Please, send mails regarding Cygwin to
Cygwin Project Co-Leader          cygwin AT cygwin DOT com
Red Hat
http://cygwin.com/ml/cygwin/2007-10/msg00284.html
package org.mr.kernel.dmf;

import java.util.HashMap;

import org.mr.MantaAgent;
import org.mr.core.util.exceptions.CreationException;
import org.mr.core.util.patterns.flow.ConfigurationLoader;
import org.mr.core.util.patterns.flow.Declaration;
import org.mr.core.util.patterns.flow.FlowFramework;

/*
 * Copyright 2002 by Coridan <support@coridan.com>
 *
 * The contents of this file are subject to the Mozilla Public License Version
 * 1.1 (the "License"); you may not use this file except in compliance with the
 * License. You may obtain a copy of the License at
 *
 * Software distributed under the License is distributed on an "AS IS" basis,
 * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
 * for the specific language governing rights and limitations under the
 * License.
 *
 * The Original Code is "MantaRay" (TM).
 *
 * The Initial Developer of the Original Code is Coridan.
 * Portions created by the Initial Developer are Copyright (C) 2006
 * Coridan Inc. All Rights Reserved.
 *
 * Contributor(s): all the names of the contributors are added in the source
 * code where applicable.
 *
 * Alternatively, the contents of this file may be used under the terms of the
 * LGPL license (the "GNU LESSER GENERAL PUBLIC LICENSE"), in which case the
 * provisions of LGPL are applicable instead of those above. If you wish to
 * allow use of your version of this file only under the terms of the LGPL
 * License and not to allow others to use your version of this file under
 * the MPL, indicate your decision by deleting the provisions above and
 * replace them with the notice and other provisions required by the LGPL.
 * If you do not delete the provisions above, a recipient may use your version
 * of this file under either the MPL or the GNU LESSER GENERAL PUBLIC LICENSE.
 *
 * This library is free software; you can redistribute it and/or modify it
 * under the terms of the MPL as stated above or under the terms of the GNU
 * Lesser General Public License as published by the Free Software Foundation;
 * either version 2.1 of the License, or any later version.
 *
 * This library is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
 * License for more details.
 */

/**
 * User: Moti Tal
 * Date: Mar 13, 2005
 * Time: 3:26:48 PM
 *
 * This class bootstraps the DMF framework by parsing the configuration file and setting up the
 * flow framework. The configuration file should be called dataManipulationFramework and placed
 * under the manta config directory.
 *
 * e.g. example of a single step in the DMF flow framework.
 * An empty step "from" attribute stands for an entry point to the flow.
 * This condition uses the default StringCondition and checks if the message is on the way in;
 * if the condition is met the plugin will be executed, otherwise it is ignored.
 * The "incoming" attribute is set automatically by the DMF framework.
 *
 * <dmf>
 *   <plugins>
 *     <plugin name="transformer" class="org.mr.mt.MantaMessageTransformer"/>
 *   </plugins>
 *   <flow id="">
 *     <step id="s1" from="" to="transformer">
 *       <conditions>
 *         <condition variable="incoming" value="true"/>
 *       </conditions>
 *     </step>
 *   </flow>
 * </dmf>
 */
public class StartupDMF {

    FlowFramework m_flowFramework = null;

    public StartupDMF() throws CreationException {
        startup();
    }

    public void startup() throws CreationException {
        try {
            // TODO: should be replaced by a common XML format using XSLT.
            HashMap hashMap = new HashMap();
            // set the declaration tag name as "plugin"
            hashMap.put(Declaration.DECLARATION_TAG_NAME, "plugin");

            // load the configuration file using the DMFFlowFramework
            String configFile = MantaAgent.getInstance().getSingletonRepository()
                    .getConfigManager().getStringProperty("plug-ins.dmf.config-file");
            ConfigurationLoader loader = new ConfigurationLoader(configFile, hashMap) {
                protected FlowFramework generateEmptyFramework() {
                    return new DMFFlowFramework();
                }
            };
            m_flowFramework = loader.loadFramework();
        } catch (Exception e) {
            throw new CreationException("Load dataManipulationFramework.xml failed,", e);
        }
    }

    public FlowFramework getFlowFramework() {
        return m_flowFramework;
    }
}
|
http://kickjava.com/src/org/mr/kernel/dmf/StartupDMF.java.htm
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
import java.util.Collection;

import org.xml.sax.SAXException;

public interface MappingHandler
{
    public void startMapping(String namespace, String localName, Collection tableMappings,
                             Collection columnMappings, MappingLocator locator)
        throws SAXException;

    public void endMapping(String namespace, String localName,
                           Collection tableMappings, Collection columnMappings)
        throws SAXException;

    public void mappingComplete(TableMapping mapping)
        throws SAXException;
}
http://kickjava.com/src/org/xquark/mapper/mapping/MappingHandler.java.htm
MP(3X) MP(3X)
NAME
mp, madd, msub, mult, mdiv, mcmp, min, mout, pow, gcd, rpow, itom,
xtom, mtox, mfree - multiple precision integer arithmetic
SYNOPSIS
#include <mp.h>
madd(a, b, c)
MINT *a, *b, *c;
msub(a, b, c)
MINT *a, *b, *c;
mult(a, b, c)
MINT *a, *b, *c;
mdiv(a, b, q, r)
MINT *a, *b, *q, *r;
mcmp(a,b)
MINT *a, *b;
min(a)
MINT *a;
mout(a)
MINT *a;
pow(a, b, c, d)
MINT *a, *b, *c, *d;
gcd(a, b, c)
MINT *a, *b, *c;
rpow(a, n, b)
MINT *a, *b;
short n;
msqrt(a, b, r)
MINT *a, *b, *r;
sdiv(a, n, q, r)
MINT *a, *q;
short n, *r;
MINT *itom(n)
short n;
MINT *xtom(s)
char *s;
char *mtox(a)
MINT *a;
void mfree(a)
MINT *a;
DESCRIPTION
These routines perform arithmetic on integers of arbitrary length. The
integers are stored using the defined type MINT. Pointers to a MINT
should be initialized using the function itom(), which sets the initial
value to n. Alternatively, xtom() may be used to initialize a MINT
from a string of hexadecimal digits. mfree() may be used to release
the storage allocated by the itom() and xtom() routines.
madd(), msub() and mult() assign to their third arguments the sum,
difference, and product, respectively, of their first two arguments.
mdiv() assigns the quotient and remainder, respectively, to its third
and fourth arguments. sdiv() is like mdiv() except that the divisor is
an ordinary integer. msqrt() produces the square root and remainder of
its first argument. mcmp() compares the values of its arguments and
returns 0 if the two values are equal, a value greater than 0 if the
first argument is greater than the second, and a value less than 0 if
the second argument is greater than the first. rpow() raises a to the
nth power and assigns this value to b. pow() raises a to the bth
power, reduces the result modulo c and assigns this value to d. min()
and mout() do decimal input and output. gcd() finds the greatest common
divisor of the first two arguments, returning it in the third argument.
mtox() provides the inverse of xtom(). To release the storage
allocated by mtox(), use free() (see malloc(3V)).
Use the -lmp loader option to obtain access to these functions.
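The semantics described above map directly onto any modern arbitrary-precision integer type. Since libmp is a legacy SunOS library, the following sketch illustrates the same operations with java.math.BigInteger (Java rather than C, purely for illustration; the comments name the corresponding mp routines):

```java
import java.math.BigInteger;

public class MpDemo {
    public static void main(String[] args) {
        // itom(n) and xtom(s): construct arbitrary-precision integers
        BigInteger a = BigInteger.valueOf(9);        // itom(9)
        BigInteger b = new BigInteger("ff", 16);     // xtom("ff") == 255

        // madd / msub / mult: sum, difference, product
        System.out.println(a.add(b));                // 264
        System.out.println(a.subtract(b));           // -246
        System.out.println(a.multiply(b));           // 2295

        // mdiv: quotient and remainder in one call
        BigInteger[] qr = b.divideAndRemainder(a);
        System.out.println(qr[0] + " r " + qr[1]);   // 28 r 3

        // gcd, and pow() as modular exponentiation
        System.out.println(a.gcd(b));                             // 3
        System.out.println(a.modPow(BigInteger.valueOf(2), b));   // 81

        // mtox: hexadecimal string output
        System.out.println(b.toString(16));          // ff
    }
}
```

Unlike the mp routines, BigInteger values are immutable, so there is no analogue of mfree(); storage is reclaimed by the garbage collector.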
DIAGNOSTICS
Illegal operations and running out of memory produce messages and core
images.
FILES
/usr/lib/libmp.a
SEE ALSO
malloc(3V)
7 September 1989 MP(3X)
http://modman.unixdev.net/?sektion=3&page=mout&manpath=SunOS-4.1.3
Windows Mixed Reality enables a user to see holograms as if they are right around you, in your physical or digital world. At its core, both HoloLens and the Desktop PCs you attach headset accessories to are Windows 10 devices; this means that you're able to run almost all of the Universal Windows Platform (UWP) apps in the Store as 2D apps.
Microsoft has been rapidly evolving the Windows platform over the past few years. That means many developers have different starting points even if they deliver an app to the Windows 10 Store today on Desktop, Mobile, or Xbox. This guide will focus on helping you get started when you have an existing app that you are trying to bring to mixed reality headsets, no matter where you're starting from.
To build an app for Windows Mixed Reality headsets, you must target the Universal Windows Platform - the developer platform introduced in Windows 10. That means to bring your app to HoloLens, you must first ensure it targets the Windows 10 Universal Windows Platform (UWP). We'll talk about ways that you can restrict your app specifically to the HoloLens device using the Windows.Holographic device family below. Here are all the potential starting points you may have with your app today:
Congratulations! Your app is now using the Windows 10 Universal Windows Platform (UWP) and is capable of running on today's Windows devices like Desktop, Mobile, Xbox, and HoloLens, as well as on future Windows devices.
Now let's jump into your AppX manifest to ensure your Windows 10 UWP app can run on HoloLens.
If your app already runs on Desktop PCs, you're set to have your app run as a 2D slate in mixed reality. To target HoloLens as well, you must ensure you are targeting the "Windows.Universal" device family.
<Dependencies>
  <TargetDeviceFamily Name="Windows.Universal" MinVersion="10.0.10240.0" MaxVersionTested="10.0.10586.0" />
</Dependencies>
If you do not use Visual Studio for your development environment, you can open AppXManifest.xml in the text editor of your choice to ensure you're targeting the Windows.Universal TargetDeviceFamily.
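As mentioned earlier, you can instead restrict your app to HoloLens alone by targeting the Windows.Holographic device family. A minimal sketch of that Dependencies section follows; the version numbers simply mirror the Windows.Universal snippet above and are an assumption, not verified minimums for Holographic:

```xml
<Dependencies>
  <!-- Restricts installation to HoloLens; version values are assumed -->
  <TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.10240.0" MaxVersionTested="10.0.10586.0" />
</Dependencies>
```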
Now that your UWP app targets "Windows.Universal", let's build your app and run it in the HoloLens Emulator.
At this point, one of two things can happen:
HoloLens Development Edition is a new device target of the Windows 10 operating system, so there are Universal Windows Platform APIs that are still undergoing testing and development. We've experienced our own challenges bringing Microsoft UWP apps to HoloLens.
Here are some high level areas that we've found to be a problem:
To get to the bottom of what's causing your UWP app not to start on HoloLens, you'll have to debug.
These steps will walk you through debugging your UWP app using the Visual Studio debugger.
As mentioned above, there are known issues with APIs under testing and development for the HoloLens Development Edition. If you find that your app uses one of the APIs in the namespaces listed as having potential problems, use the Windows Feedback tool to send feedback to Microsoft.
How to open the Windows Feedback tool
We are continually fixing platform bugs in the APIs of UWP. For APIs that are failing by design - because they are not supported on HoloLens - here are the patterns that you can expect in your app and design around:
Error codes
Collections
Asynchronous functions
Events
Now that your UWP app is running on Desktop headsets and/or HoloLens as a 2D hologram, next we'll make sure it looks beautiful. Here are some things to consider:
HoloLens uses advanced depth sensors to see the world and see users. This enables advanced gestures like bloom and air-tap. Powerful microphones also enable voice experiences. With Desktop headsets, users can use motion controllers to point at apps and take action. Windows takes care of all of this complexity for UWP apps, translating your gaze, gestures, voice and motion controller input to pointer events that abstract away the input mechanism. For example, a user may have done an air-tap with their hand or pulled the Select trigger on a motion controller, but 2D applications don't need to know where the input came from - they just see a 2D touch press, as if on a touchscreen.
Here are the high level concepts/scenarios you should understand for input when bringing your UWP app to HoloLens:
Voice input is a critical part of the mixed reality experience. We've enabled all of the speech APIs that are in Windows 10 powering Cortana when using a headset.
Once your app is up and running, package your app to submit it to the Universal Windows Store.
https://developer.microsoft.com/en-us/windows/mixed-reality/building_2d_apps
1.1.1.2 Security Identifiers (SIDs)
The security identifier (SID), as specified in [MS-DTYP] section 2.4.2, is an account identifier. It is variable in length and encapsulates the hierarchical notion of issuer and identifier. It consists of a 6-byte identifier authority field that is followed by one to fourteen 32-bit subauthority values and ends in a single 32-bit relative identifier (RID). The following diagram shows an example of a two-subauthority SID.
Figure 2: Windows SID with subauthorities
The original definition of a SID called out each level of the hierarchy. Each layer included a new subauthority, and an enterprise could lay out arbitrarily complicated hierarchies of issuing authorities. Each layer could, in turn, create additional authorities beneath it. In reality, this system created a lot of overhead for setup and deployment and made the management model group even more complicated. The notion of arbitrary depth identities did not survive the early stages of Windows development; however, the structure was too deeply ingrained to be removed.
In practice, two SID patterns developed. For built-in, predefined identities, the hierarchy was compressed to a depth of two or three subauthorities. For real identities of other principals, the identifier authority was set to five, and the set of subauthorities was set to four.
Whenever a new issuing authority under Windows is created (for example, a new machine is deployed or a domain is created), it is assigned a SID with an arbitrary value of 5 as the identifier authority. A fixed value of 21 is used as a unique value to root this set of subauthorities, and a 96-bit random number is created and parceled out to the three subauthorities, with each subauthority receiving a 32-bit chunk. When the new issuing authority for which this SID was created is a domain, this SID is known as a "domain SID".
Windows allocates RIDs starting at 1,000; RIDs that have a value of less than 1,000 are considered reserved and are used for special accounts. For example, all Windows accounts with a RID of 500 are considered built-in administrator accounts in their respective issuing authorities.
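Put together, the textual form of such a domain-account SID is S-1-5-21-&lt;sub1&gt;-&lt;sub2&gt;-&lt;sub3&gt;-&lt;RID&gt;: revision 1, identifier authority 5, the fixed root 21, three random 32-bit subauthorities, and the trailing RID. A small illustrative sketch (the helper names are hypothetical, not a Windows API):

```java
import java.security.SecureRandom;

public class SidDemo {
    // Build the textual form of a domain-account SID as described above:
    // revision 1, identifier authority 5, fixed root 21, three 32-bit
    // subauthorities, and a trailing relative identifier (RID).
    static String accountSid(long sub1, long sub2, long sub3, long rid) {
        return "S-1-5-21-" + sub1 + "-" + sub2 + "-" + sub3 + "-" + rid;
    }

    // RIDs below 1000 are reserved for special accounts; RID 500 is the
    // built-in administrator of the issuing authority.
    static boolean isReservedRid(long rid) {
        return rid < 1000;
    }

    public static void main(String[] args) {
        // The issuing authority's 96 random bits, as three unsigned 32-bit chunks.
        SecureRandom r = new SecureRandom();
        long s1 = Integer.toUnsignedLong(r.nextInt());
        long s2 = Integer.toUnsignedLong(r.nextInt());
        long s3 = Integer.toUnsignedLong(r.nextInt());

        System.out.println(accountSid(s1, s2, s3, 500)); // built-in administrator
        System.out.println(isReservedRid(500));          // true
        System.out.println(isReservedRid(1001));         // false: a normal account
    }
}
```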
Thus, a SID that is associated with an account appears as shown in the following figure.
Figure 3: SID with account association
For most uses, the SID can be treated as a single long identifier for an account. By the time a specific SID is associated with a resource or logged in a file, it is effectively just a single entity. For some cases, however, it can conceptually be treated as two values: a value that indicates the issuing authority and an identifier that is relative to that authority. Sending a series of SIDs, all from the same issuer, is one example: the list can easily be compressed to be the issuer portion and the list of IDs that is relative to that issuer.
It is the responsibility of the issuing authority to preserve the uniqueness of the SIDs, which implies that the issuer does not issue the same RID more than one time. A simple approach to meeting this requirement is to allocate RIDs sequentially. More complicated schemes are certainly possible. For example, Active Directory uses a multimaster approach that allocates RIDs in blocks. It is possible for an issuing authority to run out of RIDs; therefore, the issuing authority is required to handle this situation correctly. Typically, the authority is retired.
Windows supports the concept of groups with much the same mechanisms as individual accounts. Each group has a name, just as the accounts have names. Each group also has an associated SID.
User accounts and groups share the same name and SID namespaces. Users and groups cannot have the same name on a Windows-based system, nor can the SID for a group and a user be the same.
For access control, Windows makes no distinction between a SID that is assigned to a group or one assigned to an account. Changing the name of a user, computer, or domain does not change the underlying SID for an account. Administrators cannot modify the SID for an account, and there is generally no need to know the SID that is assigned to a particular account. SIDs are primarily intended to be used internally by the operating system to ensure that accounts are uniquely identified in the system.
https://msdn.microsoft.com/en-us/library/jj663149
csGenerateImageTexture Class Reference
A base class which represents a texture that can be displayed on the terrain. More...
#include <cstool/gentrtex.h>
Inheritance diagram for csGenerateImageTexture:
Detailed Description
A base class which represents a texture that can be displayed on the terrain. It has a colour for each pixel.
Definition at line 58 of file gentrtex.h.
Constructor & Destructor Documentation
delete it
Definition at line 62 of file gentrtex.h.
Member Function Documentation
get color (0..1) for location
Implemented in csGenerateImageTextureSolid, csGenerateImageTextureSingle, and csGenerateImageTextureBlend.
The documentation for this class was generated from the following file:
- cstool/gentrtex.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/classcsGenerateImageTexture.html
Agenda
See also: IRC log
<scribe> scribe:TRutt
Jonathan stated that Microsoft has already implemented several of the FaultCodes that we agreed to eliminate on Thursday.
We agreed to put back DestUnreachable, ActionNotSupported, and EndpointNotAvailable
Katy asked if these had to be implemented to claim conformance
Bob F stated that they do not have a MUST constraint anywhere in the spec.
David: Implementation of these faults is optional (for the last three we are adding back in)
Agreed to add this note to the descriptions of each of these three faults
RESOLUTION: Insert after the description of "destination unreachable", "action not supported", and "endpoint not available" the following: "implementation of this fault is optional."
<Jonathan>
Paul: we lost the asynchronous tests from SUN. The test implementations need more work.
<pauld> as does the report
<pauld> and collection of logs which are dip feeding in from various sources asynchronously
Jonathan: microsoft/IBM and IBM/microsoft, IBM/IBM, and microsoft/Microsoft are all OK.
Marc H: there could be a problem with the logs; our developers state the implementation is correct.
Paul: we could have problems with collecting the logs, or with generation of the report. Some of the "green" boxes could be wrongly identified as well.
Jonathan: I do not think we have
fundamental issues.
... the biggest risk is that we do not have 4 complete implementations.
Glen: once we fix some of the AXIS problems, that could be fixed.
Paul: we might have 5 implementations, but some of the features may have a different set of 4 implementations.
Bob F: are there any changes which need to be made to the spec.
Paul: I do believe we have found the problems in the text already, and we have already reported those to the group.
Bob F: what remains is documentation of 4 interoperating implementations for all of the test cases.
Bob F: would be good to have the test documentation completed by March 13 WG teleconf.
Hugo: intention to go to PR from the WG on March 13.
<Zakim> Jonathan, you wanted to close with no action
Bob F stated that the at risk feature for wsa:from has been stated by several groups as a feature that they have a use case for.
Paul D: none of the implementations have used this feature.
Paul D: My statement is incorrect. There are implementations which send wsa:From. We could add one test case to show that implementations can deal with it.
Marc G: since its optional we need two implementations to send it.
<pauld> notes that a test case could be 'don't bail on reception of a message with wsa:From' and then make sure three others interop with WSO2
Umit: I recommend that we close this issue with no change. Even though there is no explicit fallback to use of wsa:from in the behaviour text for replies, some users have found a reason to use this.
<bob> scribenick: TRutt
Jonathan: the spec has the field, but does not say what to do.
RESOLUTION: agreed to remove the "at risk" note from the CR text, and close the features at risk issue.
Glen stated that the axis implementation can generate the wsa:from element, and can do so for testing purposes.
Hugo: we should ensure that the complete implementations can all receive messages with all the optional features.
Paul D: we should add a test case which has wsa:from with mustUnderstand=true.
Hugo took the action to add the new test case for wsa:from generation with mustUnderstand=true
Bob F: this is a postponed issue ([Details] for wsa:ActionMismatch Subsubcode)
Dave H: add text to section 2.4 of soap binding- "This invalid addressing header fault MUST/SHOULD contain a problem action fault detail.
Jonathan: I do not think we need to do this, given the existing text.
RESOLUTION: agreed to close CR2 with no action.
<dhull>
<bob> This will be cr25
Title of issue: Use SOAP 1.2 properties where possible in describing SOAP 1.2 behavior.
There were no objections to accepting the proposal, as an editorial change
RESOLUTION: accept D Hull proposal for CR25 as editorial change
John Kemp sent concern in email
Discussion led to conclusion that the original resolution was still acceptable to WG
Jonathan: we can point to the
existence of xml:id.
... there is no local id attribute because xml:id exists.
RESOLUTION: Editor will provide additional text pointing to existence of xml:id
Bob F: we need to have someone give a response to John Kemp
Jonathan took action to prepare response to John Kemp on our reconsideration of CR3
Bob F asked if there are any additional issues on the Core spec.
There were no further issues identified.
Bob F: since we have no further issues, we have to produce the final version of the docs to review, and recommend for progression to PR.
Glen: when will the formal objections be reviewed and resolved.
Mark N: the director has seen the formal objections, and has not sent feedback to the WG.
Glen: what message did he send.
Marc N: the messages was "yes you can go to CR"
Hugo: until we are in recommendation, the director can say we have to resolve a particular formal objection. Officially we have to wait to see if he will assert on these objections, which he has already seen.
Bob
Bob F asked if any companies wished to change their alignment with either of the formal objections. No company wished to change their alignment.
Bob F asked the editors when they will have documents incorporating all resolutions.
Marc H stated the documents would be ready by the end of the day.
Bob F: I ask all members to review and send any coments before March 13. The intent is to vote to progress to PR at the March 13 meeting.
Hugo: we should publish the soap 1.1 note at the same time as the PR.
Bob F: Hugo will schedule a call with the Director after we vote to go to PR. The PR should be available by the end of March, given a WG decision on March 13.
s/LC01/LC109/
D hull: you can leave the faultTo empty if you have a replyTo present.
Marc H: we need to clarify "either this or that"
Katy: the fault endpoint may be able to be derived by different means than presence in the request
discussion ensued on table 5-5
Marc H: suggests combining the reply endpoint and fault endpoint rows.
Dave H: for Table 5.5, Table 5.7, and Table 5.2, we can change the entry for fault endpoint to Y with an asterisk, with the asterisk pointing to a reference to section 3 of core.
Anish: I prefer the merger of the two rows into "response endpoint".
Call attention to section 3.4 of core regarding the fault endpoint semantics.
Umit: I would rather not combine the two rows, I prefer Dave H proposal.
Jonathan: consensus that one of the reply endpoint or fault endpoint properties should appear in table 5.5.
<TonyR> a/reply endpoint or fault/reply endpoint or fault/
<umit> i would really like an example just like Anish. Combination of the specs need to be illustrated.
Dave H: it should be clarified that for robust in only the fault endpoint defaults to the reply endpoint.
RESOLUTION: LC109 - at least one of reply endpoint or fault endpoint properties should appear in table 5.5, with some indication of the co-constraints in the core spec
Jonathan: this applies to table 5.6 and 5.8.
Marc: table 5.6 was the first place it applied.
Anish: what is good for reply is good for fault. We should not prohibit.
Marc H: you cannot give a fault to a fault message in soap.
Tony R: all fault messages should have this note, not just the robust meps.
Dave H stated that we should not prohibit a fault to a fault.
<dhull> Just wondering whether it's our job to try to prohibit fault to a fault
Tony R: in 5.2.3 we do not distinguish fault.
Marc H: that is because it is an out.
RESOLUTION: LC110 agreed to be closed with no action.
Bob F: this came from the WSDL 2 group.
Hugo: the proposal is: Section 3.1.1 says "A property of the binding or endpoint named {addressing required} of type xs:boolean"; it should be phrased "A property {addressing required} of type xs:boolean, to the Binding and Endpoint components".
RESOLUTION: LC111 resolved by accepting Hugo proposal.
Marc H: you either do or do not support addressing
Tony R: addressing required is a mandatory boolean property.
Anish: supported but not required forces a false value for "addressing required".
Glen: we could have values "none"
, required, and optional
... having two values "required" and "optional" for this attribute.
Jonathan summarized on the whiteboard what we have today.
Jonathan: if using is on binding it goes to both binding and endpoint. If using is on endpoint, it does not go to binding.
Anish: I like having the values "required" and "optional" string enumerations.
Jonathan: we need to clarify 3.1.1 to say what we want. I do not like calling these "required" properties. if the existence of the property depends on the syntax of "using addressing"
Tony: could we change the "required" property to a "mandatory" property.
Jonathan: existence of using addressing on binding results in the required attribute being true in binding.
Anish: I do not understand what required means in this context.
Katy: I challenge our earlier decision to put using addressing on endpoint
<uyalcina> +1 to Katy
<anish> i see the following columns for the table: 1) what's in the syntax, 2) does the processor recognize the extension, 3) wsdl:required value, 4) property present in the component model 5) value of the property in the component
Jonathan: we can restrict the rules for bindings to be separate as for endpoint.
Anish proposed a table with 5 columns to express the semantics of using addressing to components.
Jonathan: we need to remove the word required in the text, and to have a link from binding using addressing to binding component, and another link form endpoint using addressing to endpoint component.
Group broke for lunch, to think about proper resolution for this lc issue
<Jonathan> Scribe: Jonathan
Marc has checked in draft PR docs.
Hugo will send a link to the snapshot to the WG.
Bob: before lunch, Anish was thinking of a table. Jonathan was talking about a list of actions to resolve this issue.
Jonathan: Proposal:
... Section 3.1.1: remove the word REQUIRED.
... Describe that UsingAddressing on Binding annotates Binding component,
... Describe that UsingAddressing on Endpoint annotates Endpoint component,
... Possibly note that both values must be considered to determine whether Addressing is required for a particular message.
Anish: wsdl:required should not
be used directly as the value of the property.
... Might want a different value set for the property.
Glen: Would be nicer if the
property existed or not, and then a separate property for the
value.
... If wsdl:required=true the value should be "required", if wsdl:required=false or not there, the value should be "optional".
Tony: I like "addressing support" with the possible values of "required", "optional", "not known"/"not supported"
Anish: If the sytnax doesn't
exist, you can still add the property.
... The processor doesn't understand addressing, it won't be a property in the component model.
Jonathan: Be clear on component model - it's subtle.
Anish: Just change the naming of the property.
Bob: Regardless of the name, we have relationships. We need first to get relationship.
Tony: Want to allow the property to appear even when there is no UsingAddressing extension.
Marc: Strange since you're telling people about yourself.
Tony: Want the third property to
encompass that there is no addressing support here.
... Not happy that absence implies it's supported.
... How can I describe the service as not supporting addressing?
Anish: We don't have such a marker.
Katy: Send a mustUnderstand="1" and get an error back.
<marc> can't imagine a service ever saying wsa:addressing_support="unknown"
Tony: Thought that was what the tri-state is about.
Marc: It really only has two states. You either require it, or you don't.
Glen: It says, I support, or you have to use.
Marc: You don't have to tell I support. No new information provided.
Anish: We have three states in
the value.
... and presence.
marc: But if it's not present, it's not telling you that the service requires addressing.
Anish: Maybe the service isn't telling you but it is still required.
(don't go there)
Glen: Difference between
supported and required, and no indication.
... If you understand addressing, you've gotta do it.
Marc: from the services
perspective, you say UA=true when it's required.
... when it's not, you put it in or not.
... doesn't matter which.
Glen: How does that map to component model properties? Should be this addressing use property.
Anish: If you require addressing, you must put UsingAddressing in the WSDL?
Marc: Not that far, just if you require it and you want an interoperable mechanism, use this one.
Umit: Provision in WSDL for this marker. We went through this well in WSDL.
Marc: My proposal is equivalent to {prop}="true" or not there. "false" doesn't give you any other information.
Glen: "false" tells you that you
understood the marker at least.
... If you don't support WS-A in the syntax, you shouldn't support it in the component model.
Jonathan: Thinks there is a difference. "False" means you can send the headers with mu and not expect a fault.
Glen: If you see this at the WSDL
processing time, I will know whether I can send those headers
or not. I might want to fault if the service doesn't declare
support for WS-A.
... Tricky case is the optional one. We're overloading the marker to mean "process the WSDL" and "allows WS-A".
... Once its in the component model, you're supposed to use WS-A at that point.
Tony: wsdl:required is different than addressing required.
Glen: Those concepts do get confused, both in SOAP and WSDL. We've been trying to overload that.
<Zakim> anish, you wanted to ask marc how his view would look like in terms of the component model
Jonathan: Believe they collapse in practice.
Anish: Spec says if you put an
annotation in and say wsdl:required, it may change the meaning
of the element to which it's attached.
... The qname we've defined means that you have to use addressing if required=true.
... We define the meaning of the QName.
Glen: It says the meaning of
[parent] might change when you understand [extension].
... in practice it works out Ok. It's your choice to "understand" or not.
Anish: Can you translate how that translates to the component model.
Marc: Not up on cm speak.
Glen: If you don't understand the element you can't annotate the cm.
Umit: Provider puts the extension in. Client's perspective it contributes to the component model.
Glen: Choices are: understand the required extension or fault.
<anish> it turns out that it is not us who are getting confused about overloading wsdl:required. The wsdl spec says the following:
<anish> "A key purpose of an extension is to formally indicate (i.e., in a machine-processable way) that a particular feature or convention is supported or required."
<Zakim> marc, you wanted to talk about table 3.1
(scribe loses a bit)
Marc: I'm willing to change my
position. Spec says there are different requirements on an
endpoint depending on whether they've put this in the WSDL or
not.
... Table 3-1. Key difference is MAPs in input message.
... If you put it in, you have to accept addressing headers.
... Component model does need to reflect tri-state.
Bob: Tri-state issue, seems discussion revolves around whether this.
Katy: Required/supported/not-known are the states in the table.
Glen: This is a problem with WSDL, I don't know what properties there are, what good does an abstract model do?
Jonathan: That's taken care of us by WSDL - read the Extensions part.
Bob: What's the proposal then?
Jonathan repeats. Essentially asking for a mapping of XML to component model.
Hugo: I proposed a mapping
originally, Marc said it duplicated a lot of things.
... Group said we can keep what we have.
Jonathan: Asserts that failed to
be clear enough.
... Would like to follow WSDL's style.
Bob: Fundamental objections?
Glen: Takes the WSDL required value rather than a different value.
Anish: Call it {addressing} = "required" / "optional"
Glen: call it {using addressing}
Jonathan: Proposal:
<Jonathan> ... Section 3.1.1: remove the word REQUIRED.
scribe: Add mapping of XML to
component model
... Possibly note that both properties must be consulted.
... Change property name to {addressing}, values "required" / "optional"
Glen: Namespace properties?
Jonathan: Didn't namespace the core WSDL properties. No framework for doing that.
Bob: Accept the proposal?
Katy: Looking at 3.2.1, might be impacts there.
Marc: Needs more editorial direction.
Hugo: I'll draft some text.
<scribe> ACTION: Hugo to draft mapping to CM of UsingAddressing. [recorded in]
RESOLUTION:
Proposal (see above) accepted as resolution of
lc112.
... and equivalent editorial changes to 3.2.1.
ACTION 1=Hugo to draft mapping of CM to UsingAddressing and Anonymous
ACTION 1=Hugo to draft mapping of CM to UsingAddressing and Anonymous, and Action
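For concreteness, the UsingAddressing marker under discussion is an extension element placed on a WSDL binding and flagged with wsdl:required. A hedged sketch follows; the element name and namespace URI reflect the WS-Addressing WSDL binding drafts of this period and are shown as assumptions rather than final published values:

```xml
<wsdl:binding name="ExampleBinding" type="tns:ExampleInterface"
              xmlns:wsdl="http://www.w3.org/ns/wsdl"
              xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl">
  <!-- wsdl:required="true"  maps to {addressing} = "required";
       wsdl:required="false" (or absent) maps to "optional",
       per the resolution of lc112 above -->
  <wsaw:UsingAddressing wsdl:required="true"/>
</wsdl:binding>
```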
Hugo: This section should be
translated into WSDL 2.0 lingo (CM instead of XML)
... Also not too clear on where it is allowed, have to chase references to intersect where UsingAddressing and soap:module are both allowed.
... We should do the math for our readers. The intersection is the Binding component.
Glen: Not on endpoint?
... Didn't we fix that at some point?
Hugo checks.
Glen: Need separate bindings for differnet module combinations? Too bad.
Bob: Objections?
Umit: UsingAddressing and soap:module have different expressive powers?
Glen: Researching a WSDL issue, we need soap:module at endpoint level.
Umit: Doesn't like this at the endpoint. Would rather we removed UsingAddressing at the endpoint.
Bob: Proposal acceptable as it stands?
Umit: Need to check editorially that we don't imply UA and module are equivalent.
Glen: Equivalent semantics, but not equivalent places you can put it.
RESOLUTION: lc113
closed by accepting Hugo's proposal
... plus editorial check that we don't imply UA and module are completely equivalent.
RESOLUTION: Fix
typos.
... lc114 closed by fixing typos.
RESOLUTION: lc115 closed by accepting fix.
Bob: Doesn't say how this extension affects the WSDL 2.0 CM.
Jonathan: Let's do it.
... property is a list of elements.
Anish: What about attributes on
the wsa:ReferenceParameters element? Are they lost?
... Isn't there a type for reference parameters already?
... So use the same type as the [reference parameters] property in 2.1 of Core.
Umit: We discussed it, we just forgot?
Proposal: Add a {reference parameters} property to the WSDL 2.0 component model, with the same type as the [reference parameters] property in Core 2.1
RESOLUTION: lc116 closed with proposal above.
Bob: Call on the 13th is the last opportunity to touch the document. Our fundamental reason for the call is to pass the document off. The meeting will last as long as it takes.
Jonathan: suggests we require any comments to be accompanied with a complete and detailed proposal.
<pauld> +1 to Jonathan
RESOLUTION: Comments must be accompanied with a detailed proposal.
Anish: Once it goes to PR it's out of our hands.
Hugo: It's in my hands, so small changes (e.g. editorial) can be made as a result of AC or other comments.
Bob: Would like to hear from the test group after the meeting on Tuesday; if there are issues that mean we'll slip the date, we'll have to reschedule.
... No reason to hold the 8th call - cancelled.
... FTF in Cambridge, MA for May 4-5(?) by IBM.
... Pencil in the FTF.
Jonathan: Thinks we'll need it.
Bob: Don't buy tickets quite yet though.
... This is your 8 week notice of the FTF. Possibility it might be cancelled though.
Adjourned...
http://www.w3.org/2002/ws/addr/6/03/03-ws-addr-minutes.html
#include <Modelx.h>
List of all members.
Constructor
Destructor
Intersection method using bounding spheres between this and another model.
Creates the dynamic bounding sphere with its bounding sphere hierarchy.
Creates the default animation if one exists.
Returns the current animation.
Returns the absolute elapsed time.
Init the given model with the core model.
Intersection of a ray with a model.
Returns if an animation is current running.
Renders the given model.
Resets and cleans up the model.
Sets the elapsed time by frame.
Sets the elapsed time. Time is clamped to the current animation's time range.
Sets the next animation.
Updates the given model by time.
The dynamic bounding sphere of this model.
Orientation of the model in world space.
Position of the model in world space.
Scaling of the model.
Flag indicating whether quaternions should be used (on by default).
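The intersection method listed above tests bounding spheres against each other. The ES3D signature isn't shown here, but the underlying test is standard: two spheres overlap when the squared distance between their centers is at most the squared sum of their radii. A minimal sketch with illustrative names (not the ES3D API):

```python
def spheres_intersect(center_a, radius_a, center_b, radius_b):
    """True when the two bounding spheres overlap or touch."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    # Compare squared distance against the squared radius sum to avoid a sqrt.
    return dx * dx + dy * dy + dz * dz <= (radius_a + radius_b) ** 2

print(spheres_intersect((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # overlapping
print(spheres_intersect((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # too far apart
```

Comparing squared quantities is the usual trick here, since the square root adds cost without changing the comparison's outcome.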
http://es3d.sourceforge.net/doxygen/class_e_s3_d_1_1_modelx.html
NAME
mono - Mono's ECMA-CLI native code generator (Just-in-Time and Ahead-of-Time)
SYNOPSIS
mono [options] file [arguments...]
mono-sgen [options] file [arguments...]

The mono command uses the Boehm conservative garbage collector, while the mono-sgen command uses a moving and generational garbage collector.

The AOT compiler accepts the following options:

outfile=[filename]
       Instructs the AOT compiler to save the output to the specified file.
write-symbols
       Instructs the AOT compiler to emit debug symbol information.
save-temps,keep-temps
       Instructs the AOT compiler to keep temporary files.
threads=[number]
       This is an experimental option for the AOT compiler to use multiple threads when compiling the methods.
nodebug
       Instructs the AOT compiler to not output any debugging information.
ntrampolines=[number]
       When compiling in full AOT mode, the method trampolines must be precreated in the AOT image. You can add additional method trampolines with this argument. Defaults to 1024.
nrgctx-trampolines=[number]
       When compiling in full AOT mode, the generic sharing trampolines must be precreated in the AOT image. You can add additional trampolines with this argument. Defaults to 1024.
nimt-trampolines=[number]
       When compiling in full AOT mode, the IMT trampolines must be precreated in the AOT image. You can add additional trampolines with this argument. Defaults to 128.
print-skipped-methods
       If the AOT compiler cannot compile a method for any reason, enabling this flag will output the skipped methods to the console.

For more information about AOT, see: - project.com/AOT

The debugger agent accepts the following options:

transport=transport_name
       This is used to specify the transport that the debugger will use to communicate. It must be specified and currently requires this to be 'dt_socket'.

In server mode, the Mono runtime actively waits for the debugger front end to connect to the Mono process; Mono will print out to stdout the IP address and port where it is listening.

--help, -h
       Displays usage instructions.
--llvm
       If the Mono runtime has been compiled with LLVM support (not available in all configurations), Mono will use the LLVM optimization and code generation engine to JIT or AOT compile. For more information, consult: - project.com/Mono_LLVM
--nollvm
       When using a Mono that has been compiled with LLVM support, this reverts to the default code generation engine. Before using these flags for a deployment setting, you might want to actually measure the benefits of using them.
--runtime=VERSION
       Mono supports different runtime versions. The version used depends on the program that is being run or on its configuration file (named program.exe.config). This option can be used to override such autodetection.

The security system can be enabled by calling Mono with the "cas" parameter. The following modes are supported:

cas
       This allows mono to support declarative security attributes, e.g. execution of Code Access Security (CAS) or non-CAS demands.
core-clr
       Enables the core-clr security system, typically used for Moonlight/Silverlight applications. It provides a much simpler security system than CAS; see - project.com/Moonlight

For example, to trace the System.String namespace except for the System.String:Concat method:

       mono --trace=T:System.String,-M:System.String:Concat

--break method
       Inserts a breakpoint before the method whose name is `method' (namespace.class:methodname). Use `Main' as the method name to insert a breakpoint on the application's main method.
--profile[=profiler[:profiler_args]]
       Mono has a built-in profiler called 'default' (and it is also the default if no arguments are specified), but developers can write custom profilers; see the section "CUSTOM PROFILERS" for more details. If a profiler is not specified, the default profiler is used. The profiler_args is a profiler-specific string of options for the profiler itself. For example:

       mono --profile program.exe

       will run the program with the default profiler and will do time and allocation profiling.

       mono --profile=default:stat,alloc,file=prof.out program.exe

       will do sample statistical profiling and allocation profiling on program.exe. The profile data is put in prof.out.
LOG PROFILER
This is the most advanced profiler. More information about how to use the log profiler is available on the mprof-report(1) page.

MONO_CPU_ARCH
       Its value has the form "armvV [options]", where V is the architecture number (4, 5, 6 or 7) and the only option currently recognized is "thumb". Example: MONO_CPU_ARCH="armv4 thumb" mono ...

MONO_EVENTLOG_TYPE
       Sets the type of event log provider to use (for System.Diagnostics).

MONO_GC_PARAMS (for the SGen garbage collector)
       nursery-size=size
              Sets the size of the nursery. The size is specified in bytes and must be a power of two. The suffixes `k', `m' and `g' can be used to specify kilo-, mega- and gigabytes.
       The major collector can be `marksweep-par' for parallel Mark&Sweep, `marksweep-fixed' for Mark&Sweep with a fixed heap, `marksweep-fixed-par' for parallel Mark&Sweep with a fixed heap, or `copying' for the copying collector. The Mark&Sweep collector is the default.
       major-heap-size=size
              Sets the size of the major heap (not including the large object space) for the fixed-heap Mark&Sweep collector (i.e. `marksweep-fixed' and `marksweep-fixed-par'). The size is in bytes, with optional suffixes `k', `m' and `g' to specify kilo-, mega- and gigabytes, respectively. The default is 512 megabytes.
       wbarrier=wbarrier
              Specifies which write barrier to use. Options are `cardtable' and `remset'. The card table barrier is faster but less precise, and only supported for the Mark&Sweep major collector on 32 bit platforms. The default is `cardtable' if it is supported, otherwise `remset'. The cardtable write barrier is faster and has a more stable and usually smaller memory footprint. If the program causes too much pinning during thread scan, it might be faster to enable remset.
       evacuation-threshold=threshold
              Sets the evacuation threshold in percent. This option is only available on the Mark&Sweep major collectors.
       concurrent-sweep
              Enables or disables concurrent sweep for the Mark&Sweep collector. If enabled, the sweep phase of the garbage collection is done in a thread concurrently with the application. Concurrent sweep is disabled by default.

MONO_GC_DEBUG (for the SGen garbage collector)
       xdomain-checks
              Performs a check to make sure that no references are left to an unloaded AppDomain.
       clear-at-gc
              Clears the nursery at GC time instead of doing it when the thread local allocation buffer (TLAB) is created. The default is to clear the nursery at TLAB creation.
       check-scan-starts
              If set, does a plausibility check on the scan_starts before and after each collection.

Additionally, Mono includes a profiler module which allows one to track what adjustments to file paths the IOMAP code needs to do. The tracking code reports the managed location (full stack trace) from which the IOMAP-ed call was made and, on process exit, the locations where all the IOMAP-ed strings were created in managed code. The latter report is only approximate as it is not always possible to estimate the actual location where the string was created. The code uses simple heuristics - it analyzes the stack trace leading back to the string allocation location and ignores all the managed code which lives in assemblies installed in the GAC as well as in the class libraries shipped with Mono (since they are assumed to be free of case-sensitivity issues). It then reports the first location in the user's code - in most cases this will be the place where the string is allocated or very close to the location. The reporting code is implemented as a custom profiler module (see the "PROFILING" section) and can be loaded in the following way:

       mono --profile=iomap yourapplication.exe

Note, however, that Mono currently supports only one profiler module at a time.

MONO_THREADS_PER_CPU
       The maximum number of threads in the general threadpool will be 20 + (MONO_THREADS_PER_CPU * number of CPUs). The default value for this variable is 10.

For more information on this subject see the - project.com/Config_system.web page.
MAILING LISTS
Mailing lists are listed at the- project.com/Mailing_Lists
WEB SITE
SEE ALSO
certmgr(1), csharp(1), mcs(1), mdb(1), monocov(1), monodis(1), mono-config(5), mozroots(1), pdb2mdb(1), xsp(1), mod_mono(8). For more information on AOT: For ASP.NET-related documentation, see the xsp(1) manual page. Mono(Mono 2.5)
http://manpages.ubuntu.com/manpages/precise/man1/mono.1.html
Your message dated Fri, 27 Apr 2007 15:02:33 +0000 with message-id <E1HhRxx-0001Xj-8c@ries.debian.org> and subject line Bug#218335: fixed in ncurses-hexedit 0.9.7: ncurses-hexedit: patch for GNU/Hurd
- From: Santiago Vila <sanvila@unex.es>
- Date: Thu, 30 Oct 2003 17:36:57 +0100 (CET)
- Message-id: <Pine.LNX.4.58.0310301735500.4534@home.unex.es>

Package: ncurses-hexedit
Version: 0.9.7-11
Tags: patch

There is no PATH_MAX in GNU/Hurd, so this package does not compile from source. The "right" fix for this would be to modify the program so that no PATH_MAX is assumed (doing dynamic allocation of the required space), but for now this patch should be enough:

diff -ru ncurses-hexedit-0.9.7.orig/src/hexedit.h ncurses-hexedit-0.9.7/src/hexedit.h
--- ncurses-hexedit-0.9.7.orig/src/hexedit.h 1999-08-08 04:20:38.000000000 +0200
+++ ncurses-hexedit-0.9.7/src/hexedit.h 2003-10-30 01:53:41.000000000 +0100
@@ -33,6 +33,9 @@
 #ifdef HAVE_LIMITS_H
 #include <limits.h>
 #endif
+#ifndef PATH_MAX
+#define PATH_MAX 4096
+#endif
 #ifdef STDC_HEADERS
 #include <stdlib.h>

Thanks.
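The patch hard-codes a 4096-byte fallback; the "right" fix mentioned in the report is to size buffers at runtime instead. As a rough illustration of that idea (in Python rather than the package's C, with the 4096 fallback mirroring the patch):

```python
import os

def path_max(path="/"):
    """Query the PATH_MAX limit for a filesystem at runtime, falling back
    to 4096 when the OS reports no fixed limit (as on GNU/Hurd)."""
    try:
        limit = os.pathconf(path, "PC_PATH_MAX")
    except (OSError, ValueError):
        limit = None
    return limit if limit and limit > 0 else 4096

print(path_max() > 0)
```

The same pattern in C would use pathconf(3) and malloc the buffer, which is why the report calls the compile-time constant only a stopgap.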
--- End Message ---
--- Begin Message ---
- To: 218335-close@bugs.debian.org
- Subject: Bug#218335: fixed in ncurses-hexedit 0.9.7-13
- From: Matej Vela <vela@debian.org>
- Date: Fri, 27 Apr 2007 15:02:33 +0000
- Message-id: <E1HhRxx-0001Xj-8c@ries.debian.org>Source: ncurses-hexedit Source-Version: 0.9.7-13 We believe that the bug you reported is fixed in the latest version of ncurses-hexedit, which is due to be installed in the Debian FTP archive: ncurses-hexedit_0.9.7-13.diff.gz to pool/main/n/ncurses-hexedit/ncurses-hexedit_0.9.7-13.diff.gz ncurses-hexedit_0.9.7-13.dsc to pool/main/n/ncurses-hexedit/ncurses-hexedit_0.9.7-13.dsc ncurses-hexedit_0.9.7-13_i386.deb to pool/main/n/ncurses-hexedit/ncurses-hexedit_0.9.7-13_i386.deb A summary of the changes between this version and the previous one is attached. Thank you for reporting the bug, which will now be closed. If you have further comments please address them to 218335@bugs.debian.org, and the maintainer will reopen the bug report if appropriate. Debian distribution maintenance software pp. Matej Vela <vela@debian.org> (supplier of updated ncurses-hexedit package) (This message was generated automatically at their request; if you believe that there is a problem with it please contact the archive administrators by mailing ftpmaster@debian.org) -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Format: 1.7 Date: Fri, 27 Apr 2007 16:48:49 +0200 Source: ncurses-hexedit Binary: ncurses-hexedit Architecture: source i386 Version: 0.9.7-13 Distribution: unstable Urgency: low Maintainer: Debian QA Group <packages@qa.debian.org> Changed-By: Matej Vela <vela@debian.org> Description: ncurses-hexedit - Edit files/disks in hex, ASCII and EBCDIC Closes: 218335 291656 Changes: ncurses-hexedit (0.9.7-13) unstable; urgency=low . * QA upload. * src/hexedit.h: Define a fallback value for PATH_MAX on GNU/Hurd. Thanks to Santiago Vila for the patch. Closes: #218335. * src/help.c: Fix typo in help screen. Closes: #291656. * debian/changelog: Remove obsolete Emacs local variables. 
Files: c99d9ddc728b7bf7b930f6e56cb2855d 631 editors optional ncurses-hexedit_0.9.7-13.dsc 6aaab003110a4c7864de922e2715000f 30008 editors optional ncurses-hexedit_0.9.7-13.diff.gz 3933cce5afdd62a7a5c7458d657c10ac 64048 editors optional ncurses-hexedit_0.9.7-13_i386.deb -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQFGMg16qbYs6sQrY8oRAuTfAJ9BddZW6jkAbkqUpx0JisZfY0yOUwCgpiNL 0cQEIpmv+hv9AiOfyIinhxs= =Pbzh -----END PGP SIGNATURE-----
--- End Message ---
https://lists.debian.org/debian-qa-packages/2007/04/msg00467.html
Community Reputation: 500 (Good)
About Grain
- Rank: Advanced Member
Grain replied to Grain's topic in Graphics and GPU Programming: Bump.
Grain posted a topic in Graphics and GPU Programming: When I create my Speaker and feed it data, it seems to only play the first buffer. The method that feeds in buffers first checks the needbuffers flag and just returns, doing nothing, if it's false. My setneedbuffers event handler is never called (I set a break point in it) and the needbuffers flag never gets set back to true. If I pause execution and check the state of DynSoundInst, it's still set to "playing". DynamicSoundEffectInstance is supposed to stop playback if it's starved of data, but that's not the case here.

public class Speaker_XNA : Speaker
{
    DynamicSoundEffectInstance DynSoundInst;
    bool needbuffers = true;
    short bitspersample;

    public Speaker_XNA(short BitsPerSample, short Channels, int SamplesPerSecond)
    {
        FrameworkDispatcher.Update();
        DynSoundInst = new DynamicSoundEffectInstance(SamplesPerSecond, (AudioChannels)Channels);
        bitspersample = BitsPerSample;
        DynSoundInst.BufferNeeded += setneedbuffers;
    }

    private void setneedbuffers(object sender, EventArgs e)
    {
        needbuffers = true;
    }
    ...
}
Grain replied to Grain's topic in Math and Physics: I like things that are evenly spaced, or symmetrical, or balanced. Irrational and prime numbers bother me just because you can't make them line up evenly with things.
Grain posted a topic in Math and Physics: I'm kind of OCD and irrational numbers really bother me. So how many digits of Pi would you need to calculate a circle the size of the known universe with the accuracy of the Planck length? That way, at least in a physical sense, Pi does have a practical end and I can stop worrying about it.
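For what it's worth, the question has a concrete back-of-the-envelope answer. Using commonly quoted figures (both assumed here: an observable universe roughly 8.8e26 m across and a Planck length of roughly 1.6e-35 m), the ratio of circumference scale to error tolerance spans about 10^62, so:

```python
import math

universe_diameter_m = 8.8e26   # assumed: diameter of the observable universe
planck_length_m = 1.6e-35      # assumed: Planck length

# Significant digits of Pi needed so the circumference error from truncating
# Pi stays below one Planck length for a universe-sized circle.
digits = math.ceil(math.log10(universe_diameter_m / planck_length_m))
print(digits)
```

Under those assumed constants, about 62 digits of Pi are enough for any physically meaningful circle.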
Grain replied to Grain's topic in General and Gameplay Programming: Oh, that BitArray did it! Thanks!
Grain posted a topic in General and Gameplay Programming: I am having trouble with this prime number sieve running out of memory. It fails at Values.Capacity = n; Previously Values was just an array defined like this: bool[] Values = new bool[n]; but it had the same result. I am passing Primesieve an int.MaxValue.

static int[] Primesieve(int n)
{
    List<bool> Values = new List<bool>();
    Values.Capacity = n;
    int[] primes = new int[(int)Math.Sqrt(n)];
    int primecount = 0;
    Values[0] = true;
    Values[1] = true;
    for (int i = 0; i < Math.Sqrt(n); i++)
    {
        if (Values[i] == false)
        {
            for (int j = i * i; j < Values.Count; j += i)
                Values[j] = true;
            primes[primecount] = i;
            primecount++;
        }
    }
    return primes;
}

I believed my problem to be that since they are bools, a value type, they are being created on the stack, so I did this to force heap allocation, but it still fails at the same place.

public class Reference<T>
{
    public T Ref;
}

static int[] Primesieve(int n)
{
    List<Reference<bool>> Values = new List<Reference<bool>>();
    Values.Capacity = n;
    int[] primes = new int[(int)Math.Sqrt(n)];
    int primecount = 0;
    Values[0].Ref = true;
    Values[1].Ref = true;
    for (int i = 0; i < Math.Sqrt(n); i++)
    {
        if (Values[i].Ref == false)
        {
            for (int j = i * i; j < Values.Count; j += i)
                Values[j].Ref = true;
            primes[primecount] = i;
            primecount++;
        }
    }
    return primes;
}

My machine has 16gb of RAM running Win7 64-bit and I even targeted the build specifically to x64. So I know it can't actually be running out of memory, as int.MaxValue Booleans should only take around 2gb assuming .NET is not packing 8 of them into 1 byte, and if it is then it should only really take up around 256mb. Or am I somehow still failing to use the heap?
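The failure here is the 2 GB single-object allocation limit rather than exhausted RAM, and the fix acknowledged in the reply above was a BitArray, which packs one flag per bit. The same space trade-off can be sketched in Python; this version uses a bytearray (one byte per flag), and a bit-packed structure like BitArray would cut memory by another factor of eight:

```python
def prime_sieve(n):
    """Sieve of Eratosthenes. The composite table costs one byte per
    candidate; a bit-packed array (like C#'s BitArray) would use n/8 bytes."""
    composite = bytearray(n)          # all zero: "not yet marked composite"
    primes = []
    for i in range(2, n):
        if not composite[i]:
            primes.append(i)
            # Start marking at i*i; smaller multiples were marked earlier.
            for j in range(i * i, n, i):
                composite[j] = 1
    return primes

print(prime_sieve(30))
```

Note this collects every prime below n, whereas the C# snippet only records primes up to sqrt(n); either way the memory cost is dominated by the flag table, not the result array.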
Grain replied to Grain's topic in General and Gameplay Programming: Same idea as a linked list, only this can potentially branch. Is there some method of keeping track of it automatically? Move to the first child first, then to any of its children/neighbors next, etc.; return to the parent when you reach the end, repeat for the next child until there are no more children, then move to the neighbor.

public class FileParseElement
{
    string token;
    FileParseElement Neighbor;
    List<FileParseElement> Children;
    List<FileParseElement>.Enumerator CurrentChild;

    public static FileParseElement BuildFullObjectStructure(ref List<string>.Enumerator ittr) {...}
    public static FileParseElement BuildObjectSet(ref List<string>.Enumerator ittr) {...}

    public string Token { get { return token; } }

    public FPE_Iterator GetIterator()
    {
        return new FPE_Iterator(this) { };
    }

    public class FPE_Iterator // this IS a nested class
    {
        FileParseElement Head;
        Stack<FileParseElement> ParrentStack;

        public FPE_Iterator(FileParseElement FPE)
        {
            ParrentStack = new Stack<FileParseElement>();
            Head = FPE;
        }

        public FileParseElement Current { get { return Head; } }

        public bool Movenext()
        {
            if (Head.Children != null)
            {
                if (Head.CurrentChild != new List<FileParseElement>.Enumerator()) // compiler doesn't allow this
                {
                    ParrentStack.Push(Head);
                    Head = Head.Children.GetEnumerator().Current;
                    return true;
                }
            }
            if (Head.Neighbor != null /* && I have no children or finished traversing them */)
            {
                Head = Head.Neighbor;
                return true;
            }
            else
            {
                // pop a parent off the stack and assign it to Head
                return false;
            }
        }
    }
}

Movenext() is what I'm working on now. It's far from complete.
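The Movenext logic being worked out above (visit a node, then its children, then its neighbor, with a stack standing in for the parent pointers) can be written as an iterative depth-first walk. A Python sketch with hypothetical token/children/neighbor fields mirroring FileParseElement:

```python
class Node:
    def __init__(self, token, children=(), neighbor=None):
        self.token = token
        self.children = list(children)
        self.neighbor = neighbor

def walk(node):
    """Yield tokens depth-first: a node, then its children, then its neighbor."""
    stack = [node]
    while stack:
        current = stack.pop()
        yield current.token
        # Push the neighbor first so every child is visited before it.
        if current.neighbor is not None:
            stack.append(current.neighbor)
        for child in reversed(current.children):
            stack.append(child)

root = Node("root",
            children=[Node("a", children=[Node("grandchild")]), Node("b")],
            neighbor=Node("next"))
print(list(walk(root)))
```

The explicit stack replaces both the ParrentStack and the per-node CurrentChild enumerator, which sidesteps the "is this enumerator attached yet" question entirely.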
Grain replied to Grain's topic in General and Gameplay Programming: I have a custom container class that is arranged like a tree. Each element can have up to 1 neighbor element and any number of child elements; these children are stored in a standard library List<T>. Each element also has a List<T>.Enumerator so that when I iterate through this container and come to the end of a branch and then back up, it will remember which branch it was last down and go to the next branch next, if it exists. Now not all elements have children, in fact most don't, so for those I don't even bother creating a List<T> instance and therefore don't have anything to attach the Enumerator to.
Grain posted a topic in General and Gameplay Programming: In C#, how can I check if a List enumerator is valid before using it? Valid meaning it has been attached to a List already. Since the enumerator is a struct it's not comparable to null, so that option is out. The .Current property can be null even if the enumerator is valid, for example if MoveNext() has not yet been called for the first time or if the end of the list has been reached, so checking that isn't helpful in all cases either. I can call MoveNext() in a try/catch block, which would tell me if it's valid or not, but if the enumerator IS valid I don't want to move to the next item just yet. I would want to check if the .Current property is not null and then work with that item before calling MoveNext(). And I don't really like catching exceptions as part of performing basic logic anyway; they should be only for error handling.
- How about this then?

public static Vector2 Project(this Vector2 A, Vector2 B)
{
    float DotOverDot = Vector2.Dot(A, B) / Vector2.Dot(A, A);
    if (float.IsNaN(DotOverDot) || float.IsInfinity(DotOverDot))
        return Vector2.Zero;
    else
        return Vector2.Multiply(A, DotOverDot);
}
- What you are trying to do here is figure out how much of B is pointing in the A direction. This assumes that A is a direction vector. A direction vector is usually required to have a length of 1. Any vector that can be normalized can be a direction vector. However, a zero-length vector cannot be normalized and thus has no direction. If A were correctly normalized, your Project function would simply return Vector2.Dot(A, B). If you should not be projecting zero-length vectors, it seems that there is a problem upstream of this function. Personally, I would get rid of this function, make sure that my direction vector was normalized, and use the dot product directly. -Josh

Normalizing requires a square root call which I'd like to avoid as much as possible, and seeing as I can project just fine while avoiding that, I see no benefit in doing so. Also, zero vectors are perfectly valid in some cases (a velocity vector, for example).
- Also I'm projecting B onto A, as where this example projects A onto B.
- I'm not sure what you mean considering the result should be the same for all lengths of A with the sole exception 0.
Grain posted a topic in Math and Physics: Currently I'm doing this:

public static Vector2 Project(this Vector2 A, Vector2 B)
{
    if (A.X != 0f || A.Y != 0)
        return Vector2.Multiply(A, Vector2.Dot(A, B) / Vector2.Dot(A, A));
    else
        return A; // return B; ???
}

Without the zero-length check in there, everything blows up because this method returns an effectively infinite length vector. What is the best thing to return when the length of A is zero?
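One defensive version of this, returning the zero vector in the degenerate case (the same choice as the Vector2.Zero variant quoted above), sketched in Python with plain tuples:

```python
def project(a, b):
    """Project vector b onto vector a: a * (a.b / a.a).
    Returns the zero vector when a has zero length, since the projection
    direction is undefined there."""
    denom = a[0] * a[0] + a[1] * a[1]   # dot(a, a), no sqrt needed
    if denom == 0.0:
        return (0.0, 0.0)
    scale = (a[0] * b[0] + a[1] * b[1]) / denom
    return (a[0] * scale, a[1] * scale)

print(project((2.0, 0.0), (3.0, 4.0)))   # component of b along the x-axis
print(project((0.0, 0.0), (3.0, 4.0)))   # degenerate case
```

Returning zero is the least-surprise option for physics code: a zero direction contributes nothing, rather than propagating NaN or an arbitrary vector downstream.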
Grain replied to Grain's topic in General and Gameplay Programming: Either you haven't explicitly defined lifetimes, or you haven't explicitly defined ownership (which goes back to my earlier point, that "shared" is not a valid definition of ownership). If both are explicitly defined, then there cannot be any other code that refers to dead resources. And my point is that you should be doing this implementation in C++, too. Skating by with shared ownership semantics only takes you so far. Ok. Object 1 owns object 2. This is explicit. Object 3 has a reference to object 2 because it needs to work with some data it has. Object 1 dies, so it kills object 2 as well. This is also explicit. Now, we still need to inform object 3 that its reference to object 2 is no longer valid.
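One way to handle the "inform object 3" step is to invert it: give the non-owning holder a weak reference and let it ask whether the resource is still alive, instead of being notified. A Python sketch (names are illustrative; object 1's death is simulated by dropping the only strong reference):

```python
import weakref

class Resource:
    """Stands in for 'object 2', the owned resource."""
    def __init__(self, name):
        self.name = name

owned = Resource("object2")        # object 1's strong, owning reference
observer = weakref.ref(owned)      # object 3's non-owning reference

print(observer() is not None)      # resource still alive
del owned                          # object 1 dies, releasing object 2
print(observer() is not None)      # object 3 now sees a dead reference
```

The same pattern exists in C++ as std::shared_ptr plus std::weak_ptr: the observer calls lock() and gets null once the owner is gone, so no explicit notification is needed.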
https://www.gamedev.net/profile/58407-grain/
Roman Suzi wrote: > P.S. Just look at the neighboor thread: > Subject: minidom toxml() not emitting attribute namespace qualifier I feel the need to point out a couple of things in relation to this bug in minidom. 1. The bug was fixed in minidom, but not in the base distribution. 2. Using namespaced attributes is a fairly sophisticated thing to do. I think that anyone whose usage of XML was advanced enough to use namespaced attributes would be very likely to have the excellent pyxml library installed, and thus would never see the bug. 3. The latter is evidenced by the fact that no-one seems to be clamouring to have the bug fixed in the base distribution. It's been there for at least 8 months, probably longer. But anyone whose use cases might trip-off the bug will most likely be using pyxml.minidom. -- alan kennedy ----------------------------------------------------- check http headers here: email alan:
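For concreteness, setting a namespaced attribute with minidom looks like this; in current CPython the qualified name survives serialization (the bug discussed above affected much older releases):

```python
from xml.dom.minidom import getDOMImplementation

XSI_NS = "http://www.w3.org/2001/XMLSchema-instance"

doc = getDOMImplementation().createDocument(None, "root", None)
root = doc.documentElement
# The qualified name ("prefix:localname") passed here is what toxml() emits.
root.setAttributeNS(XSI_NS, "xsi:nil", "true")
print(root.toxml())
```

Note that minidom does not add the xmlns:xsi declaration for you; a fully namespace-correct document needs that declared explicitly as well.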
https://mail.python.org/pipermail/python-list/2003-June/196503.html
On Tue, 30 Sep 2003 grubert@... wrote:
> zope stx uses indenting as a markup
> for titles, reST uses indenting for quotes and underline for markup.
>
> what would be needed is a stx-reader or RTFM.
Well, what I need is an online tool which processes Zope-Book source
to well and reasonably formatted HTML (and other formats). If this
is not the intention of docutils we should close this bug report.
Kind regards
Andreas.
Hello,
Debian optimistically plans a release of its next stable version (3.1,
"sarge") in December, which means that packages should be ready sometime next
month.
At the moment, the python-docutils package there is from CVS because some
features people needed weren't available in 0.3. That's OK, but I wouldn't
want to have moving-target versions in a release because Debian policy is
pretty strict WRT updating. Therefore I'd like to get docutils 0.3.1 into
Debian soon (i.e., within one month or so).
Well, CVS says the version number's been 0.3.1 for a month now. ;-) What needs
to be done until there's an "official" 0.3.1 release? Is there anything I can
do to help?
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | smurf@...org
Disclaimer: The quote was selected randomly. Really. |
net
- -
Fun things to do in an elevator:
Wear a puppet on your hand and talk to other passengers "through" it.
David Goodger <goodger@...> writes:
> David Abrahams wrote:
>> Is there a way to stick a single bold '*' character in a
>> parsed-literal?
>
> This works for me:
>
> .. parsed-literal::
>
> foo ***** bar
Wow, I'd never have guessed! I guess I really don't understand the
parsing process.
> To do a single emphasized asterisk, you need to escape:
>
> .. parsed-literal::
>
> foo *\** bar
>
> The reason is that "**" is found before "*", so without the
> escape there's no closing "**" and a parsing error results.
Thanks,
--
Dave Abrahams
Boost Consulting
David Abrahams wrote:
> Is there a way to stick a single bold '*' character in a
> parsed-literal?
This works for me:
.. parsed-literal::
foo ***** bar
To do a single emphasized asterisk, you need to escape:
.. parsed-literal::
foo *\** bar
The reason is that "**" is found before "*", so without the
escape there's no closing "**" and a parsing error results.
--
David Goodger
For hire:
Docutils:
(includes reStructuredText:)
David Abrahams <dave@...> writes:
>).
Aha. I found the answer, though I'm truly not sure why it works:
foo **\ *** bar
Seems to me that could just as well cause an empty bold region and an
un-emboldened '*'.
--
Dave Abrahams
Boost Consulting).
Many TIA,
--
Dave Abrahams
Boost Consulting
Beni Cherniavsky wrote:
>).
Sounds good. From the to-do: "Units of measure? (See docutils-users,
2003-03-02.)" Patches welcome.
> Sorry, I got confused and thought `TextElement` is too wide; it fits
> the bill precisely. I added a bit of documentation about it, please
> verify I didn't say anything wrong.
Looks fine, thanks.
> Generally `doctree.txt` seems a bit confused about structural
> elements
I don't follow; can you provide examples?
> Also, each element is assigned one category in the detailed listing
> but some belong to several (e.g. `field` is both a body subelement
> and a bibliographic element).
Patches welcome.
--
David Goodger
For hire:
Docutils:
(includes reStructuredText:)
On Sun, 21 Sep 2003, David Abrahams wrote:
>
> The enclosed demonstrates what I believe to be a bug
try cvs
--
BINGO: This left unindentionally unblank
--- Engelbert Gruber -------+
SSG Fintl,Gruber,Lassnig /
A6170 Zirl Innweg 5b /
Tel. ++43-5238-93535 ---+
This bug should be fixed now, in current CVS/snapshot:
<>.
-- David Goodger@...
David Goodger wrote on 2003-09-18:
> Beni Cherniavsky wrote:
> >. ;-)
>
Oh, I see your attitude better now ;-). I misfixed it in my copy
because I actually needed it to run. Since I released my script, I
didn't like the thought somebody would download it and it would
completely fail because of the bug. But thinking of it again, I'm the
only user of my script at the moment, so you are completely right. I
will be more patient in the future...
--
Beni Cherniavsky <cben@...>
Beni Cherniavsky wrote:
> I'm trying to use inline raw latex math by making a raw
> substitution. It doesn't work, the backslashes are replaced by null
> characters and latex complains. Curiously, it only happens with
> substitutions::
That's because it's a bug/oversight. Bugs and oversights tend to be
exceptional.
> I think the null characters shouldn't even appear in the document
> tree after parsing.
Correct. Null characters are part of an internal parser process.
> I must confess I don't even begin to understand what goes on in
> the parser
The parser converts initial backslashes to null bytes with the
"escape2null" function when it looks for inline markup. The null
bytes are removed later in the process, after they've finished doing
their "escaping". The substitution definition parsing code is calling
"escape2null" too much. I'm close to a solution, but too tired to
finish up tonight.
>. ;-)
--
David Goodger
For hire:
Docutils:
(includes reStructuredText:)
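The escape2null/unescape mechanics described in this thread behave roughly like the following simplified sketch (modeled on the docutils.utils helpers, not the exact implementation):

```python
def escape2null(text):
    """Return text with reST backslash escapes turned into null-byte markers,
    the parser-internal form that protects escaped characters from the
    inline-markup recognizer."""
    parts = []
    start = 0
    while True:
        found = text.find("\\", start)
        if found == -1:
            parts.append(text[start:])
            return "".join(parts)
        parts.append(text[start:found])
        parts.append("\x00" + text[found + 1:found + 2])
        start = found + 2

def unescape(text, restore_backslashes=False):
    """Strip the null-byte markers, or turn them back into backslashes."""
    if restore_backslashes:
        return text.replace("\x00", "\\")
    return "".join(text.split("\x00"))

marked = escape2null(r"foo *\** bar")
print(unescape(marked))                            # escape consumed
print(unescape(marked, restore_backslashes=True))  # original backslash restored
```

This also shows why the raw-substitution bug surfaced as null bytes in the output: the definition parser called escape2null, but nothing downstream called unescape on the raw text.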
Since nobody came up with a fix and my `mathhack.py` script depends on
it being fixed, I commited my wrong fix (mind you, it works, it's just
at the wrong place which can still be felt by using the
docutils-xml.py writer or quicktest.py). For the benefit of others,
here is a test for it being fixed in the parser. Since it still
*fails* (properly, I didn't fix the parser), I wasn't sure whether I
should commit it...
Index: test/test_parsers/test_rst/test_substitutions.py
===================================================================
RCS file: /cvsroot/docutils/docutils/test/test_parsers/test_rst/test_substitutions.py,v
retrieving revision 1.15
diff -u -r1.15 test_substitutions.py
--- test/test_parsers/test_rst/test_substitutions.py 27 Mar 2003 03:56:37 -0000 1.15
+++ test/test_parsers/test_rst/test_substitutions.py 18 Sep 2003 18:27:33 -0000
@@ -33,6 +33,21 @@
<image alt="symbol" uri="symbol.png">
"""],
["""\
+Raw substitution, backslashes should be preserved:
+
+.. |alpha| raw:: latex
+
+ $\\alpha$
+""",
+"""\
+<document source="test data">
+ <paragraph>
+ Raw substitution, backslashes should be preserved:
+ <substitution_definition name="alpha">
+ <raw format="latex" xml:
+ $\\alpha$
+"""],
+["""\
Embedded directive starts on the next line:
.. |symbol|
--
Beni Cherniavsky <cben@...>
I'm trying to use inline raw latex math by making a raw substitution.
It doesn't work, the backslashes are replaced by null characters and
latex complains. Curiously, it only happens with substitutions::
text\ |beta|\ text
.. |beta| raw:: latex
$\beta$
gives (passed through cat -A)::
<substitution_definition name="beta">
<raw format="latex" xml:
$^@beta$
The null characters appear in the document tree itself; the html
writer also suffers it (but the null characters are less problematic
for browsers than latex). I found `states.unescape` which I can
easily apply in the writers (patch below) but I think the null
characters shouldn't even appear in the document tree after parsing. I
must confess I'm don't even begin to understand what goes on in the
parser so I can't find the proper fix.
Index: docutils/writers/html4css1.py
===================================================================
RCS file: /cvsroot/docutils/docutils/docutils/writers/html4css1.py,v
retrieving revision 1.91
diff -u -r1.91 html4css1.py
--- docutils/writers/html4css1.py 1 Sep 2003 15:09:14 -0000 1.91
+++ docutils/writers/html4css1.py 17 Sep 2003 19:43:46 -0000
@@ -932,7 +932,9 @@
def visit_raw(self, node):
if node.get('format') == 'html':
- self.body.append(node.astext())
+ # @@@ Wrong fix!
+ from docutils.parsers.rst.states import unescape
+ self.body.append(unescape(node.astext(), restore_backslashes=1))
# Keep non-HTML raw text out of output:
raise nodes.SkipNode
Index: docutils/writers/latex2e.py
===================================================================
RCS file: /cvsroot/docutils/docutils/docutils/writers/latex2e.py,v
retrieving revision 1.50
diff -u -r1.50 latex2e.py
--- docutils/writers/latex2e.py 17 Sep 2003 17:41:36 -0000 1.50
+++ docutils/writers/latex2e.py 17 Sep 2003 19:43:46 -0000
@@ -1194,7 +1194,9 @@
def visit_raw(self, node):
if node.has_key('format') and node['format'].lower() == 'latex':
- self.body.append(node.astext())
+ # @@@ Wrong fix!
+ from docutils.parsers.rst.states import unescape
+ self.body.append(unescape(node.astext(), restore_backslashes=1))
raise nodes.SkipNode
def visit_reference(self, node):
Beni Cherniavsky wrote on 2003-09-17:
>?
>
Sorry, I got confused and thought `TextElement` is too wide; it fits
the bill precisely. I added a bit documentation about it, please
verify I didn't say anything wrong. Generally `doctree.txt` seems a
bit confused about structural elements - either the subsections
structure doesn't match categories or the category lists should be
expanded to include the sub-categories (like with body elements).
Also, each element is assigned one category in the detailed listing
but some belong to several (e.g. `field` is both a body subelement and
a bibliographic element).
I fixed inline images and implemented ``align`` and ``scale``
attributes on images in LaTeX. Attributes still not handled: ``alt``
(not applicable in LaTeX), ``height`` and ``width`` (pixels
meaningless in LaTeX, spec should probably be extended to allow sizes
in percents), ``class`` (not directly applicable in LaTeX, perhaps
some mapping to stylesheet hooks could be defined for all nodes
alike).
--
Beni Cherniavsky <cben@...>
Beni Cherniavsky wrote on 2003-09-14:
> I'm now working on implementing image attributes (scale, align, etc.)
> for the latex writer. I bumped into a couple of issues.
>
I bumped into a harder issue: distinguishing inline images from
stand-alone images. Normally the ``image`` directive creates an
`image` node as a body element, but with help of substitutions,
`image` also serves as an inline element. I need to tell these two
uses apart.
Currently the LaTeX writer always outputs newlines around images.
This turns each image into a separate paragraph and should be
inhibited for inline images. I could check the context of the image
node by managing context info in other visit/depart methods or
(probably simpler) inspecting the ``.parent`` chain. But I can't
figure out a simple criterion for telling apart body vs. inline
context.
--
Beni Cherniavsky <cben@...>
Pierre-Yves Delens wrote:
> When creating mailto hrefs with docFactory (*),
What version of Docutils, DocFactory, ZRest?
Did you check with the latest versions of all? If not, please try.
>
> .. _`dethier@...`: mailto:dethier@...
Why are you doing so much unnecessary work? Just do::
>>
"@" is converted to "@" in an attempt to fool email harvesters.
You shouldn't see "@". If you do, it's a bug. Please send a
reproducible test procedure & data.
>.
Is "@" sent or "@"? If there's no ";", that would explain a lot.
Either way, it sounds like a Zope issue.
> What should I do, hopefully simple (I mean, without having to manually
> escape the '@' when authoring Restructured Text) ?
We could add an option to the HTML writer *not* to convert "@" to
"&#64;". Patches welcome!
--
David Goodger
For hire:
Docutils:
(includes reStructuredText:)
Pierre-Yves Delens wrote:
> Bonjour,
Hi. Please send to docutils-develop, not docutils-develop-admin.
> My config, at my ISP is:
> unreleased version,
Of what?
> python 2.1.3, linux2
> (not yet Zope 2.7, nor 2.6.3)
>
> Viewing (in the EDIT view) a ZRest document in the ZMI is problematic
> because of accented characters (loved in French !).
I don't know much about Zope (or anything starting with Z).
Perhaps someone on the list can help?
> Is there a solution for this ? at ZRest or at Zope level ? Is there a
> version issue (Python, Zope) ?
> Is there a solution while keeping with Python 2.1.3 compatible Zopes ?
>
> Thanks on forward
>
> ___________________________________________________
> P-Y Delens, Manager
--
David Goodger
Bonjour,
When creating mailto hrefs with docFactory (*),
.. _`dethier@...`: mailto:dethier@....
What should I do, hopefully simple (I mean, without having to manually
escape the '@' when authoring Restructured Text) ?
Thanks on forward
(*) The HTML is produced without ZRest outside of Zope, for the while (due
to versioning problem of Zope server at our ISP's)
___________________________________________________
P-Y Delens, Manager
LIENTERFACES - PY Delens, sprl
Avenue Dolez, 243 - 1180 Bruxelles
phone : 32 2 375 55 62
mail : py.delens@...
web :
___________________________________________________
engelbert.gruber@... wrote on 2003-09-14:
> to sum up beni's requirements.
>
> 1. the title page should be standard latex:
>
Not a requirement. I just think that the default look-and-feel should
match LaTeX's when possible. At least for title and author.
> * easy if we only use title,author,date.
> * harder if some other fields are required.
>
> could be done by building our own (fake standard)
> titlepage supporting additional fields.
>
> additional fields displayed
>
> * like latexs date.
> * as docinfo table.
>
Another choice: what material should go on the title page and what on
the next page (for report/book document classes).
> 2. the titlepage should contain the university logo
>
> see fancyhdr documentation, and write a short docu
> for docutils. see tools/stylesheets/style.tex for a start.
>
How is `fancyhdr` connected? It would allow a logo on every page.
Currently I just redefined ``\maketitle`` in my stylesheet, with the
logo and hardcoded title/authors, as I wanted to position them...
> 3. a real issue for the latex writer is:
>
> the latex writers document structure is:
>
> * head prefix
> stylesheet is included here
> * title
> * head
> * pdf_info
> * body prefix
> with \begin{document}
> \maketitle
> * body
> * body suffix
>
> if the titlepage should be user modifiable we need a hook for the
> titlepage construction. or maybe redefine maketitle in the stylesheet.
>
I think redefining maketitle is the best approach; it will scale to cases
where the user changes the titlepage completely. It also seems useful
to put the matter after \maketitle into another command that can be
redefined (``\aftertitle``).
> and need to set latex variables for docinfo entries.
>
> * title,author,date
> * docinfofulltable: with author and date
> * docinfotable: without author and date
> * maybe docinforevision, docinfo...
>
The main tradeoff is between a "pull" style (the user gives the
structure, order and layout, using variables we have set to fill in
the fields) and a "push" style (we give the structure, the user has
hooks to style parts of it). I think the "pull" style's flexibility
can't be matched, so I'd prefer it.
The problem is that generic fields can't be done using the pull style
(TeX has no normal list/dictionary data types to describe this ;-).
Only the finite set of known fields can be done so. Generic fields
would have to be done in the push style - the latex writer writes sort
of a table but the real formatting is delegated to commands defined by
the stylesheet.
Moreover, as the set of known fields grows, fields that used to be
generic become special and don't appear in the pull style unless the
user explicitly handles them. That's the typical issue with pulling
data - you lose anything not explicitly included. Solution: the user
uses special commands to enumerate the fields he has explicitly
handled. Perhaps the very use of a field in ``\maketitle`` should
disable the output of it as a generic field.
Also, we need to allow the stylesheet to detect which fields are
present (only from the known set).
Example of simple imaginary stylesheet::
\renewcommand{\maketitle}{
\begin{titlepage}
\begin{center}
\hfill
{\Huge \bibfieldtitle}
\hfill
\ifbibfieldauthor{\LARGE \bibfieldauthor}\fi
\hfill
\end{center}
\end{titlepage}
}
%
\showedbibfieldtitletrue % Could be omitted for title
\showedbibfieldauthortrue
\renewcommand{\genericbibfield}[2]{ % name, value
{\large #1: #2}
}
\renewcommand{\aftertitle}[1]{ % generic fields.
\vskip
\begin{center}
#1
\end{center}
}
And this doesn't completely handle single/multiple authors yet...
Probably this should be presented to the stylesheet as two fields
"author" vs. "authors" of which at most one is defined. For "authors"
we also probably need to repeat the "push" trick: the value of this
field would have hooks for styling the separation between authors.
It's easier to program python than TeX... Perhaps I'm loading too
much into the tex stylesheet and some of this should be off-loaded
to python. empy comes to mind...
--
Beni Cherniavsky <cben@...>
David Goodger wrote on 2003-09-15:
> Beni Cherniavsky wrote:
>
> > The output of ``rst2latex.py --help`` is almost
> > valid reStructuredText. Would be much simpler to run it and include
> > the results in latex.txt automatically (the same could also be done
> > with other tools).
>
> If documentation can be had by running ``rst2latex.py --help``,
> there's no need to include that documentation in static docs.
> Instead, write "run ``rst2latex.py --help`` to see the command-line
> options" in the static docs. That's what's written in
> <>.
>
Agreed. In general I think it's a useful goal that all command-line
help of docutils tools is correct RST.
> >.
>
> Are you using the latest code? I recently fixed some bugs in
> Optik/optparse.py (latest optparse.py included in the Docutils
> snapshot). Note that the optparse.py included with Python 2.3 has
> these bugs; manual installation may be necessary to override the
> stdlib module.
>
Manually installed => blank lines are fine now. Will this go into
Python 2.3.1 or should installation of `optparse.py` be re-enabled for
2.
>
> I'm in the process of fixing the parser & spec, so placeholders in
> angle brackets can contain anything but angle brackets.
>
Yes, this covers it all::
buildhtml.py --help | sed -e 's/<[^>]*>/PLACEHOLDER/g' | docutils-xml.py - /dev/null
runs cleanly. Also checked for other tools. Only pep2html.py had
problems - fixed and committed.
--
Beni Cherniavsky <cben@...>
On Mon, 15 Sep 2003, David Goodger wrote:
> Beni Cherniavsky wrote:
> > BTW, what's the point of manually maintaining the options table in
> > ``docs/latex.txt``?
>
> If you're referring to the "Options on the Commandline" section, I'd
> say there's no point. It shouldn't be there at all.
removed
> The section at
> <> should be
> completed in the style of the surrounding text.
did (or tried to, now the commandline option documentation is in
config.txt). _`attribution` collides with the same option from html4css1
--
BINGO: merging it in
--- Engelbert Gruber -------+
SSG Fintl,Gruber,Lassnig /
A6170 Zirl Innweg 5b /
Tel. ++43-5238-93535 ---+
https://sourceforge.net/p/docutils/mailman/docutils-develop/?viewmonth=200309
SSIGNAL(3) Library Functions Manual SSIGNAL(3)
NAME
ssignal, gsignal - software signals
SYNOPSIS
#include <signal.h>
int (*ssignal (sig, action))()
int sig, (*action)();
int gsignal (sig)
int sig;
DESCRIPTION
ssignal() and gsignal() implement a software facility similar to
signal(3V).
Software signals made available to users are associated with integers
in the inclusive range 1 through 15. A call to ssignal() associates a
procedure, action, with the software signal sig; the software signal,
sig, is raised by a call to gsignal(). ssignal() returns the action
previously established for that signal type; if no action has been
established or the signal number is illegal, ssignal() returns SIG_DFL.
gsignal() raises the signal identified by its argument, sig:
If an action function has been established for sig, then that
action is reset to SIG_DFL and the action function is entered with
argument sig. gsignal() returns the value returned to it by the
action function.
If the action for sig is SIG_IGN, gsignal() returns the value 1
and takes no other action.
If the action for sig is SIG_DFL, gsignal() returns the value 0
and takes no other action.
If sig has an illegal value or no action was ever specified for
sig, gsignal() returns the value 0 and takes no other action.
SEE ALSO
signal(3V)
6 October 1987 SSIGNAL(3)
http://modman.unixdev.net/?sektion=3&page=gsignal&manpath=SunOS-4.1.3
's phyloxml branch in GitHub. If you're interested in testing this code before it's been merged into Biopython, follow the instructions there to create your own fork, or just clone the phyloxml branch onto your machine.
Requirements:
- Biopython 1.51. The Bio.TreeIO.PhyloXMLIO module attempts to import each of several compatible ElementTree implementations until one succeeds. The given XML file handle is then parsed incrementally to instantiate an object hierarchy containing the relevant phylogenetic information.
To draw trees (optional), you'll also need these packages:
- NetworkX 1.0rc1 (or 0.36 for snapshot at the end of GSoC 2009)
- PyGraphviz 0.99.1
- matplotlib
The I/O and tree-manipulation functionality will work without them; they're imported on demand when the functions to_networkx() and draw_graphviz() are called.
About the format
Phyloxml objects are produced by Bio.TreeIO.PhyloXMLIO.read().
For example, this XML:
<phyloxml> '), ])) ])
which represents a phylogeny like this:
.102 _______A .06 | ______| | | .23 | |______B _| | | .4 |____________C
The tree objects are derived from base classes in Bio.Tree; see that page for more about this object representation. = PhyloXMLIO -- this is exactly the same as calling.
>>> slicing and multiple -- to_seqrecord() and from_seqrecord(). This includes the molecular sequence (mol_seq) as a Seq object, and the protein domain architecture as list of SeqFeature objects. Likewise, PhyloXML.ProteinDomain objects have a to_seqfeature() method.
Example pipeline
The code for most of the following steps is left to the reader as an exercise.
1. Grab a protein sequence -- see SeqIO.
from Bio import SeqIO # ...
2. Identify homologs using Blast.
from Bio.Blast import NCBIStandalone # ...
3. Build a tree, using a separate application.
from Bio.Align.Applications import ClustalwCommandline # ...
4. Add annotation data -- now we're using Tree and TreeIO.
from Bio.Tree import PhyloXML # ...
5. Save a phyloXML file.
from Bio import TreeIO TreeIO.write(tree, 'my_example.xml', 'phyloxml')
Related.
http://biopython.org/w/index.php?title=PhyloXML&oldid=2892
Hi,
Is there any way that I can access all the properties of a control (ActiveX or normal) during runtime and get their values? I don't want to specify the property name; I should get all the property names and their values. It is something like the Reflection namespace in VB.NET. Is it possible in VB 6.0?
Thanx
Not that I'm aware of... I've only ever tried it in .NET... so I don't know for sure if there is a way in VB6 (never had a reason to).
You'll have to make use of the TypeLib Information Library in tlbinf32.dll
All my Articles
Hannes
Thank you very much. Tlbinf32.dll is the right dll. I got what I need. Once again Thank you..
That is good news Well done!
Please mark your thread Resolved. You can do this by clicking on Thread Tools ( above your first post ), and then selecting Mark Thread Resolved.
Thank you,
Hannes
http://forums.codeguru.com/showthread.php?499128-RESOLVED-Accessing-properties-of-controls
csJoystickEventHelper Class Reference
[Event handling]
Helper class to conveniently deal with joystick events. More...
#include <csutil/event.h>
Detailed Description
Helper class to conveniently deal with joystick events.
Definition at line 188 of file event.h.
Member Function Documentation
retrieve any axis (basis 0) value
retrieve button number
Retrieve current button mask.
retrieve button state (pressed/released)
Retrieve event data.
retrieve number of axes
Retrieve joystick number (0, 1, 2, ...).
Create new joystick event.
Create new joystick event.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api-2.0/classcsJoystickEventHelper.html
Hi Brian,
it would be cool if we knew when all components are in, or had a page where missing components are mentioned (struck through once they are implemented).
I would like to test with an already existing app to see which issues arise. I already tested with components that should be in PR2, and the viewport isn't rendered at all; it's difficult to say why, as there is no JS error at all.
Maybe it's worth throwing exceptions if a component / class is called that is not present; that would make debugging much easier.
Ext.reg Exception Helper
I know that Ext.reg has been deprecated, but it would be nice to update the exception to indicate which xtype is using Ext.reg.
Code:
reg: function(xtype, cls) {
    if (Ext.isDefined(window.console)) {
        console.warn('Using deprecated Ext.reg for the ' + xtype + ' xtype. Please use the alias configuration with the widget. namespace.');
    }
    Manager.setAlias(cls, "widget." + xtype);
}
Last edited by chrisvensko; 27 Feb 2011 at 1:29 PM. Reason: Cleanup Code Block
'ref' config option not recognized in Ext4
I am getting an error message described above: 'Using deprecated Ext.reg. Please use the alias configuration with the widget. namespace.' Is there any documentation on how to use aliases in Ext JS 4?
Thank you,
Michael
For this specific issue, it means to switch your class definition to something like this:
Code:
Ext.define('MyCustomClass', {
    extend: 'Ext.SomeOtherClass',
    alias: 'widget.mycustomclass', // <-- your custom xtype
    ...
});
(For a plugin, replace 'widget.' with 'plugin.')
http://www.sencha.com/forum/showthread.php?124015-Ext-3-to-4-Migration&p=575426&viewfull=1
Web.VKHS
Description
VKHS
-
VKHS is written in Haskell and provides access to the Vkontakte social network, popular mainly in Russia. The library can be used to log into the network as a standalone application (OAuth implicit flow, as they call it). Interaction with the user is not required. For now, vkhs offers limited error detection and no captcha support.
Following example illustrates basic usage (please fill client_id, email and password with correct values):
import Web.VKHS.Login
import Web.VKHS.API

main = do
    let client_id = "111111"
    let e = env client_id "user@example.com" "password" [Photos,Audio,Groups]
    (Right at) <- login e
    let user_of_interest = "222222"
    (Right ans) <- api e at "users.get"
        [ ("uids", user_of_interest)
        , ("fields", "first_name,last_name,nickname,screen_name")
        , ("name_case", "nom")
        ]
    putStrLn ans
client_id is an application identifier, provided by vk.com. Users receive one after registering their application, following SMS confirmation. The registration form is located here.
Internally, the library uses a small curl-based HTTP automaton and tagsoup for jumping over relocations and submitting various 'Yes I agree' forms. The curl .so library is required for vkhs to work. I am using curl-7.26.0 on my system.
Debugging
-
To authenticate the user, vkhs acts like a browser: it analyzes HTML but fills all forms by itself instead of displaying pages. Of course, should vk.com change its HTML design, things would stop working.
To deal with that potential problem, I've included some debugging facilities: writing
(Right at) <- login e { verbose = Debug }
will trigger curl output plus HTML dumping to the current directory. Please mail those .html files to me if a problem appears.
Limitations
-
- Ignores 'Invalid password' answers
- Captchas are treated as errors
- Implicit-flow authentication, see documentation in Russian for details
- Probably, low speed due to restarting curl session on every request. But anyway, vk.com limits request rate to 3 per second.
Documentation
module Web.VKHS.Login
module Web.VKHS.Types
module Web.VKHS.API
http://hackage.haskell.org/package/VKHS-0.3.3/docs/Web-VKHS.html
japns did not fall in love with, they are just being made abjectly submissive to the conqueror.
.
like breaking or domesticating an animal, you'd have to whip them and give them goody bags to train them into submission.
.
that's why people should treat their defeated but unrepentant war time war crime awashed enemy by taking no prisoner but instead treat them as second class people, then and only then they will 'fall in love' with you, but watch out your back in the middle of sleep always, pearl harbour may be just around the corner..
The addition to the globally available Chinese labour force is not by any means over, as people migrate from the land,(where their income is very low) to second and third tier cities,Chinese incomes will continue to rise driven by urbanisation as well as productivity growth.
As productivity growth remains high (17% in the private sector a couple of years ago) incomes will continue to rise. Infrastucture is likely to continue to be better than in a lot of South and South East Asia.
The jobs which will go to Bangladesh and Vietnam will be the very low paid ones which require only a sewing machine (or similar) as capital investment. Industries like Autos and electronics which have complex supply chains and a need for scale will continue to grow in China. China will still have a labour force of nearly a billion. The only real labour shortage will be for very low paying jobs which will either move into the Chinese countryside or to the Indian subcontinent (or SE Asia).
This does not mean the Chinese economy will collapse or become uncompetitive, only that development will continue and really low wage jobs will migrate. China will really only have a serious problem if productivity growth stagnates (unlikely until the economy becomes a lot more urbanised) or investment ceases to make a decent return (which may happen without further reforms).
By the year 2020 twenty million Chinese men will find a shortage of women for marriage. They are already trying to import girls from such countries as Myanmar illegally; it will not help much. The situation will further worsen the number of workers available after, say, 40 years, while rich young boys and girls are emigrating abroad at a rate of 70 thousand per month!!
but however you twist and cut it, it does not change the fact reminding the world that japan is still a vassal slave state with foreign troops and bases stationed all over japan. japanese are still treated as second class citizens at best in their own country by the occupying troops.
.
china may or may not have labour shortage problems in the future, but in order for japan to be independent and free, it must let the ryukyus islands be independent and free first.
.
only by letting the ryukyus go will asian nations china, india and korea lend a hand to rescue japan with money loans and more trade..
Well, this article is a bit overreacting. China's working population has been abnormally high and now it is time for it to shrink. Why is it such a big deal for a country with already more than 1.3 billion people? Look on the bright side. I can say that population decline is a positive thing since each worker can become more efficient than before.
"Never before has the global economy benefited from such a large addition of human energy."
Stopped reading right there. The author shows an extreme lack of understanding of any real "benefit" to the world economy from the hollowing out of the US and EU's industrial labor force in favor of near slave-wage labor from China.
I should have known better than to expect anything of real substance about China from The Economist, or most western based rags in the first place.
Michael Pettis is a much more reliable source of info on what's actually going on in China. a good news. However, I had to say this is a exam for China, examing the pension system and health system. Please also focus that there are two people affording four elder people's living. If somebody realized Japan deeply, he/she will notice that the elder become a major social problem, This is also one reason of slow increase of Japan Economic.
It is not the number of working people that matters but the quality of the work force that does. Only Indian nationalists, like the Rajs of old, will talk endlessly about their country's "population dividend" as if half a billion uneducated people stuck in poverty are going to build the next superpower.
A slowly declining labor pool at this point is good news. China is already suffering serious environmental consequences of break-neck industrialization sustained by the need to maintain social stability through job creation. The decreasing number of youth entering the labor force will give the policy makers the needed slack to transition the economy away from the present mix of manufacturing/infrastructure/housing into a more sustainable mix. As long as productivity increases, the economy can keep on growing.
On the flip side of this is the long term 4-2-1 problem regarding retirement and pension. I feel the policy response to this is to abolish the 1 child policy or at least change it to a regional quota. Provinces like Tibet should have no limit on the number of children regardless of ethnicity while coastal provinces could have a ceiling of 2 kids per family..
This is the whole point of the 1-child policy isn't it? Allowing each family to pool all their resources into 1 child's education?
I think Chinese workers' productivity have far more impact on the world's economy than the total number in the labor force, a few hundred million competing knowledge workers is not going to be nearly as beneficial to western economies as a hundred million low cost labor.
The point of the one child policy was a fear of over-population. I was going to say some long-winded thing about productivity, but, in a nutshell, it really all depends.
I also agree that this (NBS data) shrinking of 3.45 m out of about 1 billion labor force in china is good news.
.
as chinese government is re-prioritising its industry with more emphasis on education, greenhouse gas emission reduction and productivity, chinese workers will be more educated, efficient, productive, healthier and richer. each worker decades later can support far more retirees and disabled than s/he can today.
.
chinese population will grow older, as every nation does with a growing economy, but they will also be healthier, richer and more prosperous.
.
this 'getting old before getting rich' nonsense is maliciously intended; no chinese in his/her right mind should buy that.
China's high proportional growth rates are due to coming from a very low GDP/ capita base. In absolute terms (additional GDP/ capita in dollar terms), China has grown less quickly in the past decade than many other countries.
.
E.g. while in PPP 2005 dollars, China added $4,550 to per capita GDP between 2001 and 2011, absolute growth was higher in Poland, Slovakia, Czech Republic, Turkey, Estonia, Lithuania, Latvia, Sweden, Russia and South Korea.
.
---------------------
While it is true that China retains plenty of space for productivity catch up & rising incomes, and this will allow China's elderly to enjoy higher living standards than today, it's important to realise that China's current proportional growth rate will fall in the coming decades (absolute growth in per person incomes will not rise much above East European levels for the foreseeable future).
.
Already, scarcity of cheap labour, scarcity of land, scarcity of fresh water and scarce capacity of the environment to endure industrial activity is constraining the potential for further volume-based investment and growth - China's economy has to pivot & reform, rather than simply scaling up. As that happens, investment returns will diminish and rates of capital accumulation will fall.
.
China will have a much richer future to look forward to, and will make a great contribution to the world. But it's important to keep everything in perspective.
http://www.economist.com/comment/1851971
A guide to C++ for C programmers
Editor’s Note: A bare bones guide to the C++ language for C programmers, excerpted from Software engineering for embedded systems by Mark Kraeling.
There are a number of reasons developers may want to consider using C++ as the programming language of choice when developing for an embedded device. C++ does compare with C in terms of syntactical similarities, in addition to memory allocation, code reuse and other features. There are also reasons to take caution when considering C++ and its related toolsets.
One reason is that functionality and performance vary across compilers due to differing implementations of the standard by individual vendors and open-source offerings. In addition, C++ and its libraries tend to be much larger and more complex than their C language counterparts. As such, there tends to be a bit of ambiguity in the community around C++ as a viable option for embedded computing, and more specifically what features of the language are conducive to embedded computing and what features should generally be avoided.
When characterizing the cost of various aspects of using C++ for embedded software development, we characterize cost as something that requires runtime resources. Such resources may be additional stack or heap space, additional computational overhead, additional code size or library size, etc. When something can be done offline a priori by the compiler, assembler, linker or loader, we consider those features to be inexpensive and in some cases absolutely free.
As behaviors differ across compilers and vendors, the burden is ultimately placed on the developer and designer to ensure that said benefits are actually achieved with a given development environment for the target architecture. Lastly, development tools change over time as new functionality is added, features are deprecated, performance is tuned and so forth.
Development tools are highly complex interdependent software systems, and as such there may periodically be regressions in performance of legacy software as tools evolve. A periodic re-evaluation of features and performance is encouraged. The topics discussed in this section are furthermore presented as general trends, and not meant to be an absolute for any specific target or toolset implementation.
Relatively inexpensive features of C++ for embedded
In the following section I detail C++ language features that are typically handled automatically by the compiler, assembler, linker and/or loader effectively free. That is to say they typically will not incur additional computational or storage overhead at run-time, or increase code size.
Static constants. C++ allows users to specify static constants in their code rather than use C-style macros. Consider the example below:
C language:
#define DRIVE_SHAFT_RPM_LIMITER 1000
C++ language:
const int DRIVE_SHAFT_RPM_LIMITER = 1000;
Developers may take pause here, in that the C++ implementation appears to require additional storage space for the variable DRIVE_SHAFT_RPM_LIMITER. However, if the address of the variable is never taken and only the literal value 1000 is used in computation, the compiler will fold the value in as a constant at compilation time, eliminating the storage overhead.
Ordering of declarations and statements. In the C programming language, programmers are required to use a specific sequence whereby blocks start with declarations followed by statements. C++ lifts this restriction, allowing declarations to be mixed in with statements in the code. While this is mostly a syntactical convenience, developers should also use caution regarding the effect on the readability and maintainability of their code.
Function overloading. Function overloading pertains to the naming conventions used for functions, and the compiler’s ability to resolve at compile time which version of a function to use at the call site. By differentiating between various function signatures, the compiler is able to disambiguate and insert the proper call to the correct version of the function at the call site. From a run-time perspective, there is no difference.
Usage of namespaces. Leveraging code reuse has the obvious benefits of improving reliability and reducing engineering overhead, and is certainly one promise of C++. Reuse of code, especially in the context of large software productions, often comes with the challenge of namespace collisions between C language functions depending on how diligent past developers have been with naming convention best practices. C++’s classes help to avoid some of these collisions, but not everything can be constructed as a class (see previously); furthermore existing C language libraries must still be accommodated in many production systems.
C++’s namespaces resolve much of this problem. Every variable in the code resolves to some namespace, if nothing else the global namespace. There should be no penalty in using these namespaces for organizational advantage.
Use of constructors and destructors. C++ adds the “new” and “delete” operators for provisioning and initializing heap-based objects. Using “new” is functionally equivalent to malloc followed by manual initialization in C, but has the added benefit of being easier to use and less prone to errors than a multi-step allocation and initialization process.
C++’s “delete” functionality is also similar to “free” in C; however, there may be run-time overhead associated with it. In the case of C, structs are not typically destructed like objects in C++. Default destructors in C++ should be empty, however. One caveat with new/delete is that certain destructors may throw run-time exceptions which would in turn incur overhead. Run-time exceptions are described in more detail in the following subsections.
Modestly expensive features of C++ for embedded
The following groups of features do not necessarily need to impact the program run-time versus their C programming counterparts, but in practice they may have an effect depending on maturity and robustness of the compiler and related tools.
Inlining of functions. The subject of inlining functions for C++ is a very broad one, with far-reaching performance impacts ranging from run-time performance to code size and beyond. When designating a function to be inlined, typically the “inline” keyword is used.
Some compilers will take this as a hint, while others will enforce the behavior. There may be other pragmas available within a given toolset for performing this action in a forcible manner, and the documentation should be consulted accordingly. One natural cost of function inlining is growth in code size: rather than invoke the function via a call site at run time, the compiler inserts a copy of the function body directly where the call site originally was.
Additionally, there may be performance impacts due to challenges in register allocation across procedure boundaries or increase in register pressure within the calling function. It is advised to closely consider the impact of inlining for your target when using C++.
Constructors, destructors and data type conversions. If a developer does not provide constructors and destructors for a given C++ class, the compiler will provision them automatically. These default constructors and destructors may never be required; indeed, the developer may have omitted them deliberately because they were not needed.
Dead-code elimination optimizations will likely remove these unused constructors and destructors, but care should be taken to ensure this is in fact the case. One should also take caution when doing various copy operations and conversion operations: for example, passing a parameter to a member function by value, in which a copy of the value must be created and passed using the stack. Such scenarios may inadvertently lead to invocation of constructors for the value being copied, which subsequently cannot be removed by dead-code elimination further on in the compilation process.
Use of C++ templates. The use of templates within C++ code for embedded systems should come with no overhead, as in principle all of the work is done ahead of time by the build tools in instantiating the right templates based on source code requirements. The parameterized templates themselves are converted into non-parameterized code by the time they are consumed by the assembler. In practice, however, there have been cases of compilers that behave in an overly conservative (or aggressive, depending on your viewpoint) manner, and instantiate more template permutations than were required by the program. Ideally dead-code elimination would prune these out, but that has been shown to not always be the case on some earlier C++ compilers.
09 August 2011 09:07 [Source: ICIS news]
SINGAPORE (ICIS)--
The plant’s current run rate is 80%, with a strong likelihood of further decline in output given disruptions to feedstock supply following recent fires at the Mailiao petrochemical complex, the source said.
All its output will be for captive use, he said.
FPCC supplies the bulk of its ECH output to the epoxy resins plant of its sister firm, Nan Ya Plastics, while a small quantity is shipped out, the source said.
The ECH plant is due for a month-long turnaround in September.
FPCC is expected to submit to the Taiwanese government on 10 August a detailed plan for safety checks at the Mailiao complex, company sources said. This follows a string of fire incidents that led to the resignation of the company’s chairman and CEO earlier this month.
Additional reporting
can you explain it in english please? if you know what i mean.. :)
edited it. the output if i run the code is there. i added comments in between the codes to let you know what i did.
Hi, my prof. asks us to write a conversion program in from binary to decimal, decimal to binary, etc.. It should have a menu system and a method should validate if the user input is number and is in...
i made some changes on this one and has no errors on it.
import java.util.Scanner;
public class assignmentA
{
public static void main(String[] args){
int[] newArray;
what will i do so that it will check the element that the user's input and make sure it's not less than 4?
ok i changed some of the parts to
import java.util.scanner;
public class assignmentA
{
public static void main(String[] args){
int[] newArray=new int[5];
how can i use an array in an if statements then? kinda confused when i put them together. its not state in the toturials.
i tried the if statement.
import java.util.scanner;
public class assignmentA
{
public static void main(String[] args){
int[] newArray=new int[5];
reading it now. in this case, what would you use? while loop or if-else statement?
im trying to to check all the elements that the users will be inputting if it is less than 4. if it is then it must show this:
System.out.println("The number "+ newArray +" is less than 4!");
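The check described in this thread could be sketched in Java roughly as follows. The class and variable names are illustrative guesses, not from the original code; note that the loop prints each element rather than the array reference, which is the bug in the println line above:

```java
public class ArrayCheck {
    // Print and count the elements of values that are below limit.
    public static int countBelow(int[] values, int limit) {
        int count = 0;
        for (int value : values) {   // check each element, not the array itself
            if (value < limit) {
                System.out.println("The number " + value + " is less than " + limit + "!");
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] newArray = {1, 5, 2, 9, 3};   // stand-in for the user's input
        int found = countBelow(newArray, 4);
        System.out.println(found + " numbers were less than 4");
    }
}
```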
yes. i will be changing that in a bit. the name of the file should be the same with the class name.
do you know how to fix that one? it says i need to make the the number that user inputted so...
technically, i only have 2 errors since the 1st is easy to fix. anyone knows how to fix the remaining two?
ok i have checked again and the errors says.
Main.java:3: class assignmentA is public, should be declared in a file named assignmentA.java
public class assignmentA
^
Main.java:10:...
i compiled it in an online compile called Codepad and the error only says
error: 'import' does not name a type
Hi my professor wants me to make a program this this set of guidline.
i don't know if my code is correct and have any errors, please if you see any, try to tell me what it is and how to fix...
import java.util.Scanner;
public class assignmentB
{
public int calc(){
int number1;
int number2;
int number3;
int sum;
this is the program that i need to do
You are required to write a command line program in Java that declares three numbers.
Write a method called calc() that takes in the three numbers as...
Issues
ZF-3995: .
Posted by Matthew Weier O'Phinney (matthew) on 2008-08-22T14:54:57.000+0000
This is definitely a documentation issue, and I'm scheduling to fix it for RC3.
Posted by Thomas Weidner (thomas) on 2008-08-24T14:58:29.000+0000
Assigned right component
Posted by Matthew Weier O'Phinney (matthew) on 2008-08-24T15:47:00.000+0000
Fixed in trunk and 1.6 release branch
Posted by Wil Sinclair (wil) on 2008-09-02T10:39:20.000+0000
Updating for the 1.6.0 release.
Posted by Ota Mares (ota) on 2008-09-12T04:05:30.000+0000
The method still makes no sense in the final 1.6.0 release.
Posted by Ota Mares (ota) on 2008-09-12T04:10:09.000+0000
Reopened because the method still makes no sense in the final 1.6.0 release. The description of the bug report still applies.
First off, what the hell is $context? Where does it come from? And why should it have input and id keys? And, as reported, the value of the $value parameter will be overwritten by the $context parameter's input key entry.
Posted by Matthew Weier O'Phinney (matthew) on 2008-09-12T05:47:25.000+0000
Validators only need a value, but can also take an optional $context parameter; typically, this will be the set of values being validated, such as $_POST or $_GET. In Zend_Form, we pass the entire set of values being validated in the form to the $context parameter.
$context is used to provide, well, context to the validator. In the case of a captcha, there are usually multiple values in the dataset that are used to identify and validate it: the "id" field is used so that Zend_Captcha knows which session namespace to look for the token in, and the "input" field is the actual user input that is being tested.
While the logic may make no sense to you, it makes sense to those who have developed it, and, more importantly, it simply works.
Closing the ticket again. Please do not re-open.
Posted by Ota Mares (ota) on 2008-09-12T06:02:29.000+0000
Sorry but are you kidding me? There are people who do not use Zend_Form at all.
Have you ever looked at the method? You have to provide a context parameter, or else the method tells you that the input or id key is missing and the validation fails. So when you have NO context it is not possible to validate the input.
Besides that why do you have to provide the first parameter $value if it gets overwritten in any case by the value of the context parameter, see line 331 of Zend_Captcha_Word.
So, please make the method usable without the use of Zend_Form and its Zend_Captcha Element.
Posted by Benjamin Eberlei (beberlei) on 2008-09-12T06:06:54.000+0000
in line 330 the content of $value is always overwritten by the context. you cant do anything about it :-)
Posted by Benjamin Eberlei (beberlei) on 2008-09-12T06:11:07.000+0000
additionally $context is a mandatory parameter, if its not set the function returns false, line 326 to 329.
Posted by Matthew Weier O'Phinney (matthew) on 2008-09-12T10:33:47.000+0000
I think I understand the issue.
The solution is to assume the value provided is an array, and contains both id and input elements within; that way, $context is not necessary.
Scheduling for next mini release (which, due to code freeze for 1.6.1, means 1.6.2).
Please note: this is NOT a show stopper. You can simply pass the context array when not using Zend_Form.
Posted by Ota Mares (ota) on 2008-09-12T11:14:04.000+0000
{quote}The solution is to assume the value provided is an array, and contains both id and input elements within; that way, $context is not necessary.{quote} Why not simply check whether $context is null and skip the checks, since they are not needed when not using Zend_Form? Besides that, why not remove these checks completely and move them to Zend_Form?
{quote}Please note: this is NOT a show stopper. You can simply pass the context array when not using Zend_Form.{quote} How is this not a "showstopper"? It is documented nowhere, and nothing says how that array should be nested or with what elements. Besides that, the method looks illogical at first sight when you do not know about its hidden relation to Zend_Form.
Passing the context array to the method is in no way logical. I guess normal users will fall into despair when trying to use Zend_Image_Captcha.
Posted by Matthew Weier O'Phinney (matthew) on 2008-11-24T09:29:39.000+0000
isValid() updated in r12803 in trunk and r12805 in 1.7 release branch.
|
How can I call environment variables?
in the .env file I set some variables //file .env
NAME="Darwin"
But when you call it in a js file, I get an undefined value //file index.js
console.log(process.env.NAME)
and when executing node index.js
undefined
Can you explain me why this happens, Thanks.
If you include the dotenv dependency you won't have this problem.
Then simply add at the top of your script
import dotenv from 'dotenv'; dotenv.config();
(if you have es6 via babel / createreactapp etc)
or
require('dotenv').config();
When you create a .env file, it is at that point just a file like any other JS or text file; to use it we need the dotenv npm package.
Ref:
We need to use it on the top of our initial node project file.
i.e
index.js
Syntax For ES6/Type Script (With babel transpilation)
import dotenv from 'dotenv'; dotenv.config();
or
dotenv.config({path: `<path of .env file>`});
Syntax for ES5 node js environment
require('dotenv').config();
or
require('dotenv').config({path: <path of .env file>});
By default it is taking .env from the project working directory, but if you have created the .env file with different name than you need to specify the path.
After that you can console your env variable inside the code anywhere.
Import dotenv npm module.
const dotenv = require('dotenv').config();
console.log(process.env.NAME);
|
Creating a program that searches whether a word is repeated in a list, and how many times it appears
So far I have created this:
a = str(input("Word 1 = "))
b = str(input("Word 2 = "))
c = str(input("Word 3 = "))
d = str(input("Word 4 = "))
e = str(input("Word 5 = "))
f = str(input("Word 6 = "))
words = [a, b, c, d, e, f]

def count_words(words):
    for i in words:
        wordscount = {i: words.count(i)}
        print(wordscount)

count_words(words)
And this is what comes out:
{'word': 6} {'word': 6} {'word': 6} {'word': 6} {'word': 6} {'word': 6}
And my question is how can I make it so it doesn't print the key in the list if it already has so for example not the above but this:
{'word': 6}
You should slice the array and check if the word you're going to print hasn't already been checked.
def count_words(words):
    for index, i in enumerate(words):
        wordscount = {i: words.count(i)}
        if i not in words[0:index]:
            print(wordscount)
Note also that I used enumerate() to keep track of the index inside your loop.
First of all, welcome to Stack Overflow!
A solution to your problem would involve initialising another list, probably called words_mentioned, above your loop, and adding to it the words that you've already printed. If the word is in words_mentioned, don't print it. The final code would look like this:
a = str(input("Word 1 = "))
b = str(input("Word 2 = "))
c = str(input("Word 3 = "))
d = str(input("Word 4 = "))
e = str(input("Word 5 = "))
f = str(input("Word 6 = "))
words = [a, b, c, d, e, f]
words_mentioned = []

def count_words(words):
    for i in words:
        wordscount = {i: words.count(i)}
        if i not in words_mentioned:
            print(wordscount)
            words_mentioned.append(i)

count_words(words)
In order to not count words more than once, you could use a set in your
for i in words: loop: replace it with
for i in set(words):
You could also use the Counter() class, which lives in the standard library's collections module (not itertools):
from collections import Counter
print(Counter(words))
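A worked example of the Counter approach (Counter is in the standard library's collections module; the sample word list is invented, since the original reads words interactively):

```python
from collections import Counter

# Count each distinct word once, no matter how many times it repeats.
words = ["word", "word", "cat", "word", "cat", "dog"]
counts = Counter(words)

# One entry per distinct word, matching the output the question asks for.
for word, count in counts.items():
    print({word: count})
# prints {'word': 3}, then {'cat': 2}, then {'dog': 1}
```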
- Thanks a lot I am a beginner and still learning and I really appreciate the answer
|
How to Migrate a Project From ASP.NET MVC to ASP.NET Core
Get going with ASP.NET Core.
Step-by-Step Guide
Here is a practical guide on migrating a project from the ASP.NET MVC framework to ASP.NET Core. The step-by-step instructions were written by the team working on the nopCommerce open-source project, and they can be easily applied to any ASP.NET MVC project.
We can say with confidence that we were the first large project to carry out such a migration. In this guide, we tried to put together the entire migration process in a structured form and describe various bottlenecks so that other developers could rely on this material and follow the roadmap when solving the same task.
Why Port to ASP.NET Core?
Before proceeding to the steps for porting from ASP.NET MVC to ASP.NET Core, let's take a quick overview of the framework's advantages.
ASP.NET Core is already a fairly well-known framework with several major updates making it quite stable, advanced, and resistant to XSRF/CSRF attacks.
Cross-platform is one of the distinguishing features making it more and more popular. From now on, your web application can run both in Windows and Unix environment.
ASP.NET Core comes in the form of NuGet packages. It allows application optimization, including the selected required packages. This improves solution performance and reduces the time it takes to upgrade separate parts of the application. This is the second important characteristic that allows developers to integrate new features into their solution more flexible.
Performance is another strong point in building a high-performance application. ASP.NET Core handles 2,300% more requests per second than ASP.NET 4.6, and 800% more requests per second than Node.js. You can check the detailed performance tests yourself here or here.
Middleware is a new light and fast modular pipeline for in-app requests. Each part of the middleware processes an HTTP request and then either decides to return the result or passes the request to the next part of the middleware. This approach gives the developer full control over the HTTP pipeline and contributes to the development of simple modules for the application, which is important for a growing, open-source project.
Also, ASP.NET Core MVC provides features that simplify web development. nopCommerce already used some of them, such as the Model-View-Controller template, Razor syntax, model binding, and validation. Among the new features are:
Tag Helpers: Server-part code for participation in creating and rendering HTML elements in Razor files.
View components: A new tool, similar to partial views, but of much higher performance. nopCommerce uses view components when reusing rendering logic is required and the task is too complex for partial view.
DI in views: Although most of the data displayed in views come from the controller, nopCommerce also has views where dependency injection is more convenient.
Of course, ASP.NET Core has many more features, but we viewed what we consider to be the most interesting ones only.
Now, let's consider the points to keep in mind when porting your app to a new framework.
Migration
The following description contains a large number of links to the official ASP.NET Core documentation, to give more detailed information on each topic and guide developers who are confronting this kind of task for the first time.
Step 1: Preparing a Toolkit
The first thing you need is to upgrade Visual Studio 2017 to version 15.3 or later and install the latest version of .NET Core SDK.
Before porting, we advise using the .NET Portability Analyzer. This can be a good starting point for understanding how labor-intensive porting from one platform to another will be. Nevertheless, this tool does not cover all the issues, and the process has many pitfalls that must be solved as they emerge. Below, we will describe the main steps and the solutions used in the nopCommerce project.
The first and the easiest thing to do is to update links to the libraries used in the project so they support .NET Standard.
Step 2: NuGet Package Compatibility Analysis to Support .NET Standard
If you use NuGet packages in your project, check whether they are compatible with .NET Core. One way to do this is to use the NuGetPackageExplorer tool.
Step 3: The New Format of the Csproj File in .NET Core
A new approach for adding references to third-party packages was introduced in .NET Core. When adding a new class library, you need to open the main project file and replace its contents as follows:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.6" />
    ...
  </ItemGroup>
  ...
</Project>
References to the connected libraries will be loaded automatically. For more information on comparing the project.json and CSPROJ properties, read the official documentation here and here.
Step 4. Namespace Update
Delete all uses of System.Web and replace them with Microsoft.AspNetCore
Step 5. Configure the Startup.cs File Instead of Using Global.asax
ASP.NET Core has a new way of loading applications. The app entry point is Startup, and there is no dependency on the Global.asax file. Startup registers the middleware in the app, and it must include the Configure method, where the required middleware is added to the pipeline.
Issues to solve in Startup.cs:
Configuring middleware for MVC and WebAPI requests
Configuring for:
Exception handling: You will inevitably face various collisions during porting, thus be ready and set up exception handling in the development environment. With UseDeveloperExceptionPage, we add middleware to catch exceptions.
MVC routing: Registration of new routes has also been changed. IRouteBuilder is now used instead of RouteCollection, along with a new way to register constraints (IActionConstraint).
MVC/WebAPI filters. The filters should be updated in accordance with the new implementation of ASP.NET Core.
//add basic MVC feature
var mvcBuilder = services.AddMvc();

//add custom model binder provider (to the top of the provider list)
mvcBuilder.AddMvcOptions(options => options.ModelBinderProviders.Insert(0, new NopModelBinderProvider()));
/// <summary>
/// Represents model binder provider for the creating NopModelBinder
/// </summary>
public class NopModelBinderProvider : IModelBinderProvider
{
    /// <summary>
    /// Creates a nop model binder based on passed context
    /// </summary>
    /// <param name="context">Model binder provider context</param>
    /// <returns>Model binder</returns>
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));

        var modelType = context.Metadata.ModelType;
        if (!typeof(BaseNopModel).IsAssignableFrom(modelType))
            return null;

        //use NopModelBinder as a ComplexTypeModelBinder for BaseNopModel
        if (context.Metadata.IsComplexType && !context.Metadata.IsCollectionType)
        {
            //create binders for all model properties
            var propertyBinders = context.Metadata.Properties
                .ToDictionary(modelProperty => modelProperty, modelProperty => context.CreateBinder(modelProperty));
            return new NopModelBinder(propertyBinders, EngineContext.Current.Resolve<ILoggerFactory>());
        }

        //or return null to further search for a suitable binder
        return null;
    }
}
Areas: To include an Area in an ASP.NET Core app, add a regular route to the Startup.cs file. For example, this is how it looks when configuring the Admin area:
app.UseMvc(routes =>
{
    routes.MapRoute("areaRoute", "{area:exists}/{controller=Admin}/{action=Index}/{id?}");

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});
When doing this, a folder named Areas, with the Admin folder inside, should be in the app root. The [Area("Admin")] and [Route("admin")] attributes are then used to connect the controller with this area.
Now, all we need to do is create views for all the actions described in the controller.
[Area("Admin")]
[Route("admin")]
public class AdminController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}
Validation
IFormCollection should not be passed to controllers, since in this case ASP.NET Core server-side validation is disabled: MVC suppresses further validation if an IFormCollection parameter is found to be not null. To solve the problem, this property can be added to the model instead, which prevents passing it directly to the controller method. Note that this rule only applies when a model is available; if there is no model, there is no validation anyway.
Child properties are no longer automatically validated and should be specified manually.
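The IFormCollection workaround described above might be sketched as follows; the model and controller names here are hypothetical, not actual nopCommerce types:

```csharp
public class CheckoutModel : BaseNopModel
{
    // Binding IFormCollection as a model property, rather than a direct
    // action parameter, keeps MVC server-side validation enabled.
    public IFormCollection Form { get; set; }
}

public class CheckoutController : Controller
{
    [HttpPost]
    public IActionResult Confirm(CheckoutModel model) // validation still runs
    {
        if (!ModelState.IsValid)
            return View(model);

        return RedirectToAction("Completed");
    }
}
```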
Step 6: Migrate HTTP handlers and HttpModules to Middleware
HTTP handlers and HTTP modules are in fact very similar to the concept of Middleware in ASP.NET Core, but unlike modules, the middleware order is based on the order in which they are inserted into the request pipeline. The order of modules is mainly based on the events of the application life cycle.
The order of middleware for responses is opposite to the order for requests, while the order of modules for requests and responses is the same. Knowing this, you can proceed with the update.
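The ordering rules above can be sketched in a typical Startup.Configure method; the exact middleware set here is illustrative, not nopCommerce's actual pipeline:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Requests flow top-down through this list; responses bubble back up
    // through the same components in reverse order.
    if (env.IsDevelopment())
        app.UseDeveloperExceptionPage(); // outermost: sees exceptions from everything below

    app.UseStaticFiles();    // may short-circuit the pipeline for static content
    app.UseAuthentication(); // must run before anything that reads the user
    app.UseMvc();            // effectively terminal for MVC requests
}
```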
What should be updated:

Migration of modules to middleware (AuthenticationMiddleware, CultureMiddleware, etc.)

Use of new middleware.
Authentication in nopCommerce does not use a built-in authentication system; for this purpose, AuthenticationMiddleware developed in accordance with the new ASP.NET Core structure is used.
public class AuthenticationMiddleware
{
    private readonly RequestDelegate _next;

    public AuthenticationMiddleware(IAuthenticationSchemeProvider schemes, RequestDelegate next)
    {
        Schemes = schemes ?? throw new ArgumentNullException(nameof(schemes));
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public IAuthenticationSchemeProvider Schemes { get; set; }

    public async Task Invoke(HttpContext context)
    {
        context.Features.Set<IAuthenticationFeature>(new AuthenticationFeature
        {
            OriginalPath = context.Request.Path,
            OriginalPathBase = context.Request.PathBase
        });

        var handlers = context.RequestServices.GetRequiredService<IAuthenticationHandlerProvider>();
        foreach (var scheme in await Schemes.GetRequestHandlerSchemesAsync())
        {
            try
            {
                if (await handlers.GetHandlerAsync(context, scheme.Name) is IAuthenticationRequestHandler handler &&
                    await handler.HandleRequestAsync())
                    return;
            }
            catch
            {
                // ignored
            }
        }

        var defaultAuthenticate = await Schemes.GetDefaultAuthenticateSchemeAsync();
        if (defaultAuthenticate != null)
        {
            var result = await context.AuthenticateAsync(defaultAuthenticate.Name);
            if (result?.Principal != null)
            {
                context.User = result.Principal;
            }
        }

        await _next(context);
    }
}
ASP.NET Core provides a lot of built-in middleware that you can use in your application, but developers can also create their own middleware and add it to the HTTP request pipeline. To simplify this process, we added a special interface to nopCommerce, and now it is enough to just create a class that implements it.
public interface INopStartup
{
    /// <summary>
    /// Add and configure any of the middleware
    /// </summary>
    /// <param name="services">Collection of service descriptors</param>
    /// <param name="configuration">Configuration of the application</param>
    void ConfigureServices(IServiceCollection services, IConfiguration configuration);

    /// <summary>
    /// Configure the using of added middleware
    /// </summary>
    /// <param name="application">Builder for configuring an application's request pipeline</param>
    void Configure(IApplicationBuilder application);

    /// <summary>
    /// Gets order of this startup configuration implementation
    /// </summary>
    int Order { get; }
}
Here, you can add and configure your middleware:
/// <summary>
/// Represents object for the configuring authentication middleware on application startup
/// </summary>
public class AuthenticationStartup : INopStartup
{
    /// <summary>
    /// Add and configure any of the middleware
    /// </summary>
    /// <param name="services">Collection of service descriptors</param>
    /// <param name="configuration">Configuration of the application</param>
    public void ConfigureServices(IServiceCollection services, IConfiguration configuration)
    {
        //add data protection
        services.AddNopDataProtection();

        //add authentication
        services.AddNopAuthentication();
    }

    /// <summary>
    /// Configure the using of added middleware
    /// </summary>
    /// <param name="application">Builder for configuring an application's request pipeline</param>
    public void Configure(IApplicationBuilder application)
    {
        //configure authentication
        application.UseNopAuthentication();
    }

    /// <summary>
    /// Gets order of this startup configuration implementation
    /// </summary>
    public int Order => 500; //authentication should be loaded before MVC
}
Step 7: Using Built-in DI
Dependency injection is one of the key features to keep in mind when designing an app in ASP.NET Core. You can develop loosely-coupled applications that are more testable, modular, and as a result, more maintainable.
This was made possible by following the principle of dependency inversion. To inject dependencies, IoC (Inversion of Control) containers are used. In ASP.NET Core, such a container is represented by the IServiceProvider interface. Services are registered in the Startup.ConfigureServices() method.
Any registered service can be configured with one of three lifetimes:

transient

scoped

singleton
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

services.AddSingleton<ISingleton, MySingleton>();
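The three lifetimes might be illustrated as follows; the service and implementation names are hypothetical:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // transient: a new instance every time the service is resolved
    services.AddTransient<IMailSender, SmtpMailSender>();

    // scoped: one instance per HTTP request
    services.AddScoped<IOrderService, OrderService>();

    // singleton: a single instance for the whole application lifetime
    services.AddSingleton<ICacheManager, MemoryCacheManager>();
}
```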
Step 8: Using the Web API Compatibility Shim

To simplify the migration of an existing Web API, we advise using the NuGet package Microsoft.AspNetCore.Mvc.WebApiCompatShim. It supports the following compatibility features:

Adding the ApiController type.

Enabling Web API-style model binding.

Extending model binding so that controller actions can accept parameters of type HttpRequestMessage.

Adding message formatters that enable actions to return results of type HttpResponseMessage.
services.AddMvc().AddWebApiConventions();

routes.MapWebApiRoute(name: "DefaultApi",
    template: "api/{controller}/{id?}");
Step 9: Porting Application Configuration
Some settings were previously saved in the web.config file. Now, we use a new approach based on key-value pairs set by configuration providers. This is the recommended method in ASP.NET Core, and we use the appsettings.json file.

You can also use the NuGet package System.Configuration.ConfigurationManager if, for some reason, you want to continue using *.config files. In this case, the app cannot run on Unix platforms, but on IIS only.
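As a rough sketch, a setting stored in appsettings.json can be read through the injected IConfiguration; the key names below are assumptions, not nopCommerce's actual configuration:

```csharp
// appsettings.json (hypothetical):
// { "ConnectionStrings": { "DefaultConnection": "..." },
//   "Nop": { "UseRowNumberForPaging": true } }

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // colon-separated keys address nested JSON sections
        var connectionString = _configuration.GetConnectionString("DefaultConnection");
        var useRowNumber = _configuration.GetValue<bool>("Nop:UseRowNumberForPaging");
        // ... register services using these values
    }
}
```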
If you want to use the Azure key storage configuration provider, then you need to refer to Migrating content to Azure Key Vault. Our project did not contain such a task.
Step 10: Porting Static Content to Wwwroot
To serve static content, specify the content root of the current directory to the web host. The default folder is wwwroot. You can configure your own folder for storing static files by setting up the static files middleware.
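A minimal sketch of the static files configuration; the custom folder name is an assumption:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // serves files from wwwroot by default
    app.UseStaticFiles();

    // optionally expose an additional folder, e.g. /Themes (hypothetical)
    app.UseStaticFiles(new StaticFileOptions
    {
        FileProvider = new PhysicalFileProvider(
            Path.Combine(env.ContentRootPath, "Themes")),
        RequestPath = "/Themes"
    });
}
```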
Step 11: Porting EntityFramework to EF Core
If the project uses specific features of Entity Framework 6 that are not supported in EF Core, it makes sense to run the application on the .NET Framework. Though in this case we have to give up cross-platform support, and the application will run on Windows and IIS only.
Below are the main changes to be considered:
The System.Data.Entity namespace is replaced by Microsoft.EntityFrameworkCore.

The signature of the DbContext constructor has been changed. Now you should inject DbContextOptions.

The HasDatabaseGeneratedOption(DatabaseGeneratedOption.None) method is replaced by ValueGeneratedNever().

The WillCascadeOnDelete(false) method is replaced by OnDelete(DeleteBehavior.Restrict).

The OnModelCreating(DbModelBuilder modelBuilder) method is replaced by OnModelCreating(ModelBuilder modelBuilder).

The HasOptional method is no longer available.

Object configuration is changed. Now OnModelCreating is used, since EntityTypeConfiguration is no longer available.

The ComplexType attribute is no longer available. Complex type support appeared in EF Core 2.0 with the Owned Entity type, and tables without a primary key are supported with QueryType in EF Core 2.1.

The IDbSet interface is replaced by DbSet.

Foreign keys in EF Core generate shadow properties using the [Entity]Id template, unlike EF6, which uses the [Entity]_Id template. Therefore, add foreign keys as regular properties to the entity first.

To support DI for DbContext, configure your DbContext in ConfigureServices.
/// <summary>
/// Register base object context
/// </summary>
/// <param name="services">Collection of service descriptors</param>
public static void AddNopObjectContext(this IServiceCollection services)
{
    services.AddDbContextPool<NopObjectContext>(optionsBuilder =>
    {
        optionsBuilder.UseSqlServerWithLazyLoading(services);
    });
}

/// <summary>
/// SQL Server specific extension method for Microsoft.EntityFrameworkCore.DbContextOptionsBuilder
/// </summary>
/// <param name="optionsBuilder">Database context options builder</param>
/// <param name="services">Collection of service descriptors</param>
public static void UseSqlServerWithLazyLoading(this DbContextOptionsBuilder optionsBuilder, IServiceCollection services)
{
    var nopConfig = services.BuildServiceProvider().GetRequiredService<NopConfig>();

    var dataSettings = DataSettingsManager.LoadSettings();
    if (!dataSettings?.IsValid ?? true)
        return;

    var dbContextOptionsBuilder = optionsBuilder.UseLazyLoadingProxies();

    if (nopConfig.UseRowNumberForPaging)
        dbContextOptionsBuilder.UseSqlServer(dataSettings.DataConnectionString, option => option.UseRowNumberForPaging());
    else
        dbContextOptionsBuilder.UseSqlServer(dataSettings.DataConnectionString);
}
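The shadow foreign-key point above might be sketched like this; the entity names are illustrative:

```csharp
// EF6 mapped this relation through a shadow "Category_Id" column;
// EF Core would generate "CategoryId" instead, so declare the
// foreign key explicitly to keep control of the column name.
public class Product
{
    public int Id { get; set; }

    // explicit foreign-key property, matching EF Core's [Entity]Id convention
    public int CategoryId { get; set; }

    public virtual Category Category { get; set; }
}
```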
To verify that EF Core generates a similar database structure as Entity Framework when migrating, use the SQL Compare tool.
Step 12: Remove All HttpContext References and Replace Obsolete Classes
During the project migration, you will find that a sufficiently large number of classes have been renamed or moved, and now, you should comply with the new requirements. Here is a list of the main changes you may encounter:
HttpPostedFileBase --> FormFile.

HttpContext can now be accessed via IHttpContextAccessor.

HtmlHelper --> IHtmlHelper.

ActionResult --> IActionResult.

HttpUtility --> WebUtility.

ISession, instead of HttpSessionStateBase, is accessible from HttpContext.Session in Microsoft.AspNetCore.Http.

Request.Cookies returns IRequestCookieCollection : IEnumerable<KeyValuePair<string, string>>. So, instead of HttpCookie, we use KeyValuePair<string, string> from Microsoft.AspNetCore.Http.

Namespace replacements:

SelectList --> Microsoft.AspNetCore.Mvc.Rendering.

UrlHelper --> WebUtility.

MimeMapping --> FileExtensionContentTypeProvider.

MvcHtmlString --> IHtmlString and HtmlString.

ModelState, ModelStateDictionary, ModelError --> Microsoft.AspNetCore.Mvc.ModelBinding.

FormCollection --> IFormCollection.

Request.Url.Scheme --> this.Url.ActionContext.HttpContext.Request.Scheme.
Other:

MvcHtmlString.IsNullOrEmpty(IHtmlString) --> String.IsNullOrEmpty(variable.ToHtmlString()).

[ValidateInput(false)] does not exist anymore and is no longer needed.

HttpUnauthorizedResult --> UnauthorizedResult.

The [AllowHtml] attribute (the AllowHtmlAttribute class) does not exist anymore and is no longer needed.

The TagBuilder.SetInnerText method is replaced by InnerHtml.AppendHtml.

JsonRequestBehavior.AllowGet when returning JSON is no longer needed.

HttpUtility.JavaScriptStringEncode --> JavaScriptEncoder.Default.Encode.

Request.RawUrl --> Request.Path and Request.QueryString should be concatenated separately.

XmlDownloadResult: now you can simply use return File(Encoding.UTF8.GetBytes(xml), "application/xml", "filename.xml").
Step 13: Authentication and Authorization Update
As mentioned previously, the nopCommerce project does not use the built-in authentication system, as it is implemented in a separate middleware layer. However, ASP.NET Core has its own system for providing credentials. You can view the documentation to learn about them in detail.
As for data protection, we no longer use MachineKey. Instead, we use the built-in data protection feature. By default, keys are generated when the application starts. The key storage can be:

File system: file system-based keystore.

Azure Storage: data protection keys in Azure Blob storage.

Redis: data protection keys in the Redis cache.

Registry: used if the application does not have access to the file system.

EF Core: keys are stored in the database.

If the built-in providers are not suitable, you can specify your own key storage provider by implementing a custom IXmlRepository.
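As a hedged sketch, a custom key storage provider implements the IXmlRepository contract from Microsoft.AspNetCore.DataProtection.Repositories; the database helper methods below are hypothetical:

```csharp
public class DatabaseXmlRepository : IXmlRepository
{
    public IReadOnlyCollection<XElement> GetAllElements()
    {
        // load previously persisted key XML documents from your own store
        return LoadKeyXmlFromDatabase()          // hypothetical helper
            .Select(XElement.Parse)
            .ToList();
    }

    public void StoreElement(XElement element, string friendlyName)
    {
        // persist the serialized key; friendlyName is a suggested identifier
        SaveKeyXmlToDatabase(friendlyName, element.ToString()); // hypothetical helper
    }
}
```

Registration might then look like `services.Configure<KeyManagementOptions>(options => options.XmlRepository = new DatabaseXmlRepository());`.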
Step 14: JS/CSS Update
The way of using static resources has changed: now they should all be stored in the project root folder wwwroot, unless configured otherwise.

When using inline JavaScript blocks, we recommend moving them to the end of the page. Just use the asp-location="Footer" attribute for your <script> tags. The same rule applies to JavaScript files.

Use the BundlerMinifier extension as a replacement for System.Web.Optimization. It enables bundling and minification of JavaScript and CSS while building the project (view the documentation).
Step 15: Porting Views
First of all, Child Actions are no longer used; instead, ASP.NET Core suggests using a new high-performance tool, ViewComponents, which are called asynchronously.
How to get a string from a ViewComponent:
/// <summary>
/// Render component to string
/// </summary>
/// <param name="componentName">Component name</param>
/// <param name="arguments">Arguments</param>
/// <returns>Result</returns>
protected virtual string RenderViewComponentToString(string componentName, object arguments = null)
{
    if (string.IsNullOrEmpty(componentName))
        throw new ArgumentNullException(nameof(componentName));

    var actionContextAccessor = HttpContext.RequestServices.GetService(typeof(IActionContextAccessor)) as IActionContextAccessor;
    if (actionContextAccessor == null)
        throw new Exception("IActionContextAccessor cannot be resolved");

    var context = actionContextAccessor.ActionContext;

    var viewComponentResult = ViewComponent(componentName, arguments);

    var viewData = ViewData;
    if (viewData == null)
    {
        throw new NotImplementedException();
    }

    var tempData = TempData;
    if (tempData == null)
    {
        throw new NotImplementedException();
    }

    using (var writer = new StringWriter())
    {
        var viewContext = new ViewContext(
            context,
            NullView.Instance,
            viewData,
            tempData,
            writer,
            new HtmlHelperOptions());

        // IViewComponentHelper is stateful, we want to make sure to retrieve it every time we need it.
        var viewComponentHelper = context.HttpContext.RequestServices.GetRequiredService<IViewComponentHelper>();
        (viewComponentHelper as IViewContextAware)?.Contextualize(viewContext);

        var result = viewComponentResult.ViewComponentType == null
            ? viewComponentHelper.InvokeAsync(viewComponentResult.ViewComponentName, viewComponentResult.Arguments)
            : viewComponentHelper.InvokeAsync(viewComponentResult.ViewComponentType, viewComponentResult.Arguments);

        result.Result.WriteTo(writer, HtmlEncoder.Default);
        return writer.ToString();
    }
}
Note that there is no need to use HtmlHelper anymore: ASP.NET Core includes many built-in Tag Helpers. When the application is running, the Razor engine handles them on the server and ultimately converts them to standard HTML elements. This makes application development a whole lot easier. And of course, you can implement your own tag helpers.
We started using dependency injection in views instead of enabling settings and services using EngineContext.
So, the main points on porting views are as follows:
Convert Views/web.config to Views/_ViewImports.cshtml to import namespaces and inject dependencies. This file does not support other Razor features, such as function and section definitions.

Convert namespaces.add to @using.

Port any settings to the main application configuration.

Scripts.Render and Styles.Render do not exist. Replace them with links to the output of LibMan or BundlerMinifier.
Conclusion
The process of migrating a large web application is a very time-consuming task, which, as a rule, cannot be carried out without pitfalls. We planned to migrate to a new framework as soon as its first stable version was released but were not able to make it right away, as there were some critical features that had not been transferred to .NET Core, in particular, those related to EntityFramework.
Therefore, we first had to make our release using a mixed approach: the .NET Core architecture with .NET Framework dependencies, which in itself is a unique solution. Being first is not easy, but we are sure we've made the right choice, and our huge community supported us in this.
We were able to fully adapt our project after the release of .NET Core 2.1, having by that time a stable solution already working on the new architecture. It remained only to replace some packages and rewrite the work with EF Core. Thus, it took us several months and two released versions to completely migrate to the new framework.
Further Reading
Published at DZone with permission of Dmitriy Kulagin. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/how-to-migrate-project-from-aspnet-mvc-to-aspnet-c
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
In the first tutorial we looked at creating our initial application, in the second tutorial we looked at events and a cleaner game loop, and in the third tutorial we looked at drawing bitmaps on screen. Today we are going to look at handling keyboard and mouse input. We covered this slightly in earlier tutorials when we looked at event queues and enabled the application to exit when the user pressed a key. This tutorial will go into a great deal more depth. Without further ado, let's jump into the code sample:
#include "stdafx.h"
#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

int main()
{
    ALLEGRO_DISPLAY *display;
    ALLEGRO_EVENT_QUEUE *queue;
    ALLEGRO_BITMAP *bitmap = NULL;

    al_init();
    display = al_create_display(640, 480);
    queue = al_create_event_queue();
    al_install_keyboard();
    al_install_mouse();
    al_register_event_source(queue, al_get_keyboard_event_source());
    al_register_event_source(queue, al_get_display_event_source(display));
    al_register_event_source(queue, al_get_mouse_event_source());

    al_init_image_addon();
    bitmap = al_load_bitmap("image64x64.jpg");
    assert(bitmap != NULL);

    bool running = true;
    float x = 0, y = 0;
    int width = al_get_display_width(display);

    while (running)
    {
        al_clear_to_color(al_map_rgba_f(0, 0, 0, 1));
        al_draw_bitmap(bitmap, x, y, 0);
        al_flip_display();

        ALLEGRO_EVENT event;
        if (!al_is_event_queue_empty(queue))
        {
            al_wait_for_event(queue, &event);

            if (event.type == ALLEGRO_EVENT_DISPLAY_CLOSE)
                running = false;

            if (event.type == ALLEGRO_EVENT_MOUSE_AXES)
            {
                x = event.mouse.x;
                y = event.mouse.y;
            }

            if (event.type == ALLEGRO_EVENT_MOUSE_BUTTON_UP)
            {
                x = y = 0;
                al_set_mouse_xy(display, 0, 0);
            }
        }

        // Actively poll the keyboard
        ALLEGRO_KEYBOARD_STATE keyState;
        al_get_keyboard_state(&keyState);

        if (al_key_down(&keyState, ALLEGRO_KEY_RIGHT))
            if (al_key_down(&keyState, ALLEGRO_KEY_LCTRL))
                x += 0.1;
            else
                x += 0.01;

        if (al_key_down(&keyState, ALLEGRO_KEY_LEFT))
            if (al_key_down(&keyState, ALLEGRO_KEY_LCTRL))
                x -= 0.1;
            else
                x -= 0.01;
    }

    al_destroy_display(display);
    al_uninstall_keyboard();
    al_uninstall_mouse();
    al_destroy_bitmap(bitmap);

    return 0;
}
When you run this application, the sprite will be drawn at the position of the mouse cursor. Clicking any mouse button will reset the sprite's position back to (0,0). You can also control the position of the sprite by using the left and right arrow keys.
As we can see, mouse and keyboard functionality is optional, so we need to call the appropriate install function for each device, and later the matching uninstall function to clean them up. Just like earlier, we create an event queue and register additional event sources for both keyboard and mouse. As you can see, multiple event types can exist in a single event queue.
Mouse handling is extremely straightforward: when the mouse is moved, it fires an ALLEGRO_EVENT_MOUSE_AXES event with the mouse location stored in the event's mouse.x and mouse.y properties. You can also see that mouse button activity fires an ALLEGRO_EVENT_MOUSE_BUTTON_UP event when a mouse button is released. In this case we set the position of the mouse using al_set_mouse_xy().
We take a slightly different approach with keyboard handling. We could use a similar event-driven approach, but it would only handle one key event at a time. What we instead want to do is poll the current status of the keyboard on every pass through our game loop. We take this approach so we can detect multiple concurrent key presses. This is done by populating an ALLEGRO_KEYBOARD_STATE structure with a call to al_get_keyboard_state(). We can then poll the status of individual keys by calling al_key_down().
Back to Table Of Contents
|
https://gamefromscratch.com/allegro-tutorial-series-part-4-keyboard-and-mouse/
|
UKOLN is supported by: The JISC Information Environment. Bath Profile Four Years On: what's being done in the UK? 7th July 2003. Andy Powell, UKOLN, University of Bath, a.powell@ukoln.ac.uk. A centre of expertise in digital information management
Contents JISC Information Environment technical architecture –putting Z39.50 and the Bath Profile in a national context JISC IE service registry –disclosing the existence of Bath Profile targets technical issues –Z39.50/Bath Profile and other discovery technologies
Simple scenario consider a researcher searching for material to inform a research paper on HIV and/or AIDS he or she searches for hiv aids using: –the RDN, to discover Internet resources –ZETOC, to discover recent journal articles (and, of course, he or she may use a whole range of other search strategies using other services as well)
Issues different user interfaces –look-and-feel –subject classification, metadata usage everything is HTML – human-oriented –difficult to merge results, e.g. combine into a list of references –difficult to build a reading list to pass on to students –need to manually copy-and-paste search results into HTML page or MS-Word document or desktop reference manager or …
Issues (2) difficult to move from discovering journal article to having copy in hand (or on desktop) users need to manually join services together problem with hardwired links to books and journal articles, e.g. –lecturer links to university library OPAC but student is distance learner and prefers to buy online at Amazon –lecturer links to IngentaJournals but student prefers paper copy in library
The problem space… from perspective of data consumer –need to interact with multiple collections of stuff - bibliographic, full-text, data, image, video, etc. –delivered thru multiple Web sites –few cross-collection discovery services (with exception of big search engines like Google, but lots of stuff is not available to Google, i.e. it is part of the invisible Web) from perspective of data provider –few agreed mechanisms for disclosing availability of content
UK JISC IE context… 206 collections and counting… (Hazel Woodward, e-ICOLC, Helsinki, Nov 2001) –Books: 10,000 + –Journals: 5,000 + –Images:250,000 + –Discovery tools:50 + A & I databases, COPAC, RDN, … –National mapping data & satellite imagery plus institutional content (e-prints, research data, library content, learning resources, etc.) plus content made available thru projects – 5/99, FAIR, X4L, … plus …
The problem(s)… portal problem –how to provide seamless discovery across multiple content providers appropriate-copy problem –how to provide access to the most appropriate copy of a resource (given access rights, preferences, cost, speed of delivery, etc.)
A solution… an information environment framework of machine-oriented services allowing the end-user to –discover, access, use and publish resources across a range of content providers move away from lots of stand-alone Web sites......towards a more coherent whole remove need for users to interact with multiple content providers –note: remove need rather than prevent
A note about portals portal word possibly slightly misleading the JISC IE architecture supports many different kinds of user-focused services… –subject portal –reading list and other tools in VLE –commercial portals (ISI Web of Knowledge, ingenta, Bb Resource Center, etc.) –library portal (e.g. Zportal or MetaLib) –SFX service component –personal desktop reference manager (e.g. Endnote) –increasingly rich browser-based tools – XSLT, Javascript, Java, SOAP, …
Discovery technologies that allow providers to disclose metadata to portals –searching - Z39.50 (Bath Profile Functional Area C), and SRW –harvesting - OAI-PMH –alerting - RDF Site Summary (RSS) fusion services may sit between provider and portal –broker (searching) –aggregator (harvesting and alerting) –catalogue (manually created records) –index (machine-generated full-text index) institutional preferences, cost, access rights, location, etc.
Shared services service registry –information about collections (content) and services (protocol) that make that content available authentication and authorisation OpenURL and other resolver services user preferences and institutional profiles terminology services metadata registries...
JISC Information Environment
Summary Z39.50 (Bath Profile), OAI, RSS are key discovery technologies... –… and by implication, XML and simple/unqualified Dublin Core –anticipate growing requirement to transport qualified DC and IEEE LOM metadata access to resources via OpenURL and resolvers where appropriate Z39.50 and OAI not mutually exclusive general need for all services to know what other services are available to them
IE Service Registry
IESR purpose to allow service components to discover and interact with other service components within the JISC IE collection descriptions (describing the content of collections) service descriptions (protocol-level detail about how to interact with service components) Z39.50, SRW, OAI-PMH, RSS, OpenURL resolvers, SOAP services, Web sites, CGI- based services ZeeRex
Z39.50 – one among many in the context of something like the JISC IE… Z39.50/Bath Profile is part of a bigger fabric of protocols (SRW, OAI_PMH, SOAP/XQuery, RDF/RDFQuery, …) many are based on XML and DC many developers will work across all the above desirable to have more consistent approaches to use of –XML, XML schemas vs. DTDs, XML namespaces
e-Learning and Bath Profile e-Learning seems to be a significant driving force behind cross-domain activity is there an argument that Bath Profile should cater better for e-Learning activities? –support for qualified DC (DC-Education) –support for IEEE LOM (as per IMS Digital Repositories Interoperability Spec.)
Conclusions Z39.50 and Bath Profile remains a key component in initiatives like the JISC IE but… it is only one component among many deployment and use is almost always in the context of other available technologies future work needs to be mindful of the way the Web is evolving (XML, URI, RDF, client/server, etc.) should IMS DRI (e-Learning work) be folded into Bath Profile?
|
http://slideplayer.com/slide/798369/
|
extract points within an image/volume mask

#include <vtkMaskPointsFilter.h>
vtkMaskPointsFilter extracts points that are inside an image mask. The image mask is a second input to the filter. Points that are inside a voxel marked "inside" are copied to the output. The image mask can be generated by vtkPointOccupancyFilter, with optional image processing steps performed on the mask. Thus vtkPointOccupancyFilter and vtkMaskPointsFilter are generally used together, with a pipeline of image processing algorithms in between the two filters.
Note also that this filter is a subclass of vtkPointCloudFilter which has the ability to produce an output mask indicating which points were selected for output. It also has an optional second output containing the points that were masked out (i.e., outliers) during processing.
Finally, the mask value indicating non-selection of points (i.e., the empty value) may be specified. The second input, masking image, is typically of type unsigned char so the empty value is of this type as well.
Definition at line 59 of file vtkMaskPointsFilter.h.
Definition at line 68 of file vtkMaskPointsFilter.h.

Specify the masking image. It must be of type vtkImageData.

Specify the masking image. It is the vtkImageData output from an algorithm.
Set / get the values indicating whether a voxel is empty.
By default, an empty voxel is marked with a zero value. Any point inside a voxel marked empty is not selected for output. All other voxels with a value that is not equal to the empty value are selected for output.
Implements vtkPointCloudFilter.
Reimplemented from vtkPolyDataAlgorithm.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Definition at line 100 of file vtkMaskPointsFilter.h.
Definition at line 111 of file vtkMaskPointsFilter.h.
|
https://vtk.org/doc/nightly/html/classvtkMaskPointsFilter.html
|
This tutorial demonstrates how to use cluster network policies to control which Pods receive incoming network traffic, and which Pods can send outgoing traffic. For more information, see Creating a cluster network policy.
Network policies allow you to limit connections between Pods. Therefore, using network policies provides better security by reducing the compromise radius.
Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set the defaults:

gcloud config set project project-id
gcloud config set compute/zone compute-zone
Creating a GKE cluster
To create a container cluster with network policy enforcement, run the following command:
gcloud container clusters create test --enable-network-policy
Restricting incoming traffic to Pods
Kubernetes NetworkPolicy resources let you configure network access policies for the Pods. NetworkPolicy objects contain the following information:

Pods the network policies apply to, usually designated by a label selector

Type of traffic the network policy affects: Ingress for incoming traffic, Egress for outgoing traffic, or both

For Ingress policies, which Pods can connect to the specified Pods

For Egress policies, the Pods to which the specified Pods can connect
First, run a web server application with label app=hello and expose it internally in the cluster:

kubectl run hello-web --labels app=hello \
    --image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
Next, configure a
NetworkPolicy to allow traffic to the
hello-web
Pods from only the
app=foo Pods. Other incoming traffic from Pods that do
not have this label, external traffic, and traffic from Pods in other namespaces
are blocked.
The following manifest selects Pods with label
app=hello and specifies an
Ingress policy to allow traffic only from Pods with the label
app=foo:
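A minimal manifest matching that description could look like this (the policy name hello-allow-from-foo is an assumption, not taken from this page):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo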
Restricting outgoing traffic from the Pods
You can also restrict outgoing traffic. To enable egress network policies, deploy a
NetworkPolicy controlling
outbound traffic from Pods with the label
app=foo while allowing traffic only
to Pods with the label
app=hello, as well as the DNS traffic.
The following manifest specifies a network policy controlling the egress traffic
from Pods with label
app=foo with two allowed destinations:
- Pods in the same namespace with the label
app=hello.
- Cluster Pods or external endpoints on port 53 (UDP and TCP).
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: foo-allow-to-hello
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: foo
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: hello
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
To apply this policy to the cluster, run the following command:
kubectl apply -f foo-allow-to-hello.yaml
Cleaning up
To avoid incurring charges to your account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the container cluster: This step will delete the resources that make up the container cluster, such as the compute instances, disks and network resources.
gcloud container clusters delete test
https://cloud.google.com/kubernetes-engine/docs/tutorials/network-policy?hl=nl
Communication
Exception Class
Definition
Represents a communication error in either the service or client application.
public ref class CommunicationException : Exception
public ref class CommunicationException : SystemException
public class CommunicationException : Exception
[System.Serializable] public class CommunicationException : SystemException
type CommunicationException = class inherit Exception
[<System.Serializable>] type CommunicationException = class inherit SystemException
Public Class CommunicationException Inherits Exception
Public Class CommunicationException Inherits SystemException
- Inheritance
- Object → Exception → SystemException → CommunicationException (in .NET Core, CommunicationException derives directly from Exception)
Examples
The following code example shows a client that handles CommunicationException types. This client also handles FaultException objects because the service has IncludeExceptionDetailInFaults set to
true.
Remarks
Robust client and service Windows Communication Foundation (WCF) applications handle CommunicationException objects that may be thrown during communication. There are also two CommunicationException-derived exception types (FaultException<TDetail> and FaultException) that clients also often expect. Therefore, in order to prevent the generic CommunicationException handler from catching these more specific exception types, catch these exceptions prior to handling CommunicationException.
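That catch ordering can be sketched as follows (the client proxy, operation name, and fault detail type are illustrative assumptions, not from this page):

```
try
{
    client.SomeOperation();
    client.Close();
}
catch (FaultException<SomeFaultDetail> fe)  // most specific first
{
    client.Abort();
}
catch (FaultException fe)                   // then general SOAP faults
{
    client.Abort();
}
catch (CommunicationException ce)           // other communication errors
{
    client.Abort();
}
catch (TimeoutException te)
{
    client.Abort();
}
```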
Note
When implementing custom channels and binding elements, it is strongly recommended that your components throw only System.TimeoutException or CommunicationException-derived objects. In the case where your components throw a recoverable exception that is specific to the component, wrap that exception inside a CommunicationException object.
For more details about designing and using the WCF fault system, see Specifying and Handling Faults in Contracts and Services.
Important
The WCF Runtime will not throw a CommunicationException that is unsafe to handle at the point where it leaves the WCF Runtime and enters user code.
https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.communicationexception?view=dotnet-plat-ext-5.0&viewFallbackFrom=netcore-2.2
A number of years ago I did a post on the IronPython Cookbook site about the Windows.Forms Calendar control. I could never get the thing to render nicely on *nix operating systems (BSD family). It sounds as though Windows.Forms development for mono (and in general) is kind of dead, so there is not much hope that solution/example will ever render nicely on *nix. Recently I've been playing with mono and decided to give gtk-sharp a shot with IronPython.
Quick disclaimers:
1) I suspect from the examples I've seen on the internet that PyGtk is a little easier to deal with than gtk-sharp. That's OK; I wanted to use IronPython and have the rest of the mono/dotNet framework available, so I went through the extra trouble to forego CPython and PyGtk and go with IronPython and gtk-sharp instead.
2) The desktop is not the most cutting edge or sexy platform in 2014. Nonetheless, where I work it is alive and well. When I no longer see engineers hacking solutions in Excel and VBA, I'll consider the possibility of outliving the desktop. Right now I'm not hopeful :-\
The results aren't bad, at least as far as rendering goes. I couldn't get the Courier font to take on OpenBSD, but the Gtk Calendar control looks acceptable. All in all, I was OK with the results on both Windows and OpenBSD. I've heard Gtk doesn't do quite as well on Apple products, but I don't own a Mac to test with. Here are a couple screenshots:
I run the cwm window manager on OpenBSD and have it set up to cut out borders on windows, hence the more minimalist look to the control there.
IronPython output on *nix has always come out in yellow or white - it doesn't show up on a white background, which I prefer. In order to get around this, I run an xterm with a black background:
xterm -bg black -fg white
Here is the code for the gtk-sharp Gtk.Calendar control:
#!/usr/local/bin/mono /home/carl/IronPython-2.7.4/ipy64.exe
import clr
GTKSHARP = 'gtk-sharp'
PANGO = 'pango-sharp'
clr.AddReference(GTKSHARP)
clr.AddReference(PANGO)
import Gtk
import Pango
import datetime
TITLE = 'Gtk.Calendar Demo'
MARKUP = '<span font="Courier New" size="14" weight="bold">{:s}</span>'
MARKEDUPTITLE = MARKUP.format(TITLE)
INFOMSG = '<span font="Courier New 12">\n\n Program set to run for:\n\n '
INFOMSG += '{:%Y-%m-%d}\n\n</span>'
DATEDIFFMSG = '<span font="Courier New 12">\n\n '
DATEDIFFMSG += 'There are {0:d} days between the\n'
DATEDIFFMSG += ' beginning of the epoch and\n'
DATEDIFFMSG += ' {1:%Y-%m-%d}.\n\n</span>'
ALIGNMENTPARAMS = (0.0, 0.5, 0.0, 0.0)
WINDOWWIDTH = 350
CALENDARFONT = 'Courier New Bold 12'
class CalendarTest(object):
inthebeginning = datetime.datetime.fromtimestamp(0)
# Debug info - make sure beginning of epoch really
# is midnight, Jan 1, 1970 GMT.
print(inthebeginning)
def __init__(self):
Gtk.Application.Init()
self.window = Gtk.Window(TITLE)
# DeleteEvent - copied from Gtk demo on internet.
self.window.DeleteEvent += self.DeleteEvent
# Frame property provides a frame and title.
self.frame = Gtk.Frame(MARKEDUPTITLE)
self.calendar = Gtk.Calendar()
# Handles date selection event.
self.calendar.DaySelected += self.dateselect
# Sets up text for labels.
self.getcaltext()
# Puts little box around text.
self.datelabelframe = Gtk.Frame()
# Try to get datelabel to align with other label.
self.datelabelalignment = Gtk.Alignment(*ALIGNMENTPARAMS)
self.datelabel = Gtk.Label(self.caltext)
self.datelabelalignment.Add(self.datelabel)
self.datelabelframe.Add(self.datelabelalignment)
# Puts little box around text.
self.datedifflabelframe = Gtk.Frame()
self.datedifflabelalignment = Gtk.Alignment(*ALIGNMENTPARAMS)
self.datedifflabel = Gtk.Label(self.timedifftext)
self.datedifflabelalignment.Add(self.datedifflabel)
self.datedifflabelframe.Add(self.datedifflabelalignment)
self.vbox = Gtk.VBox()
self.vbox.PackStart(self.datelabelframe)
self.vbox.PackStart(self.datedifflabelframe)
self.vbox.PackStart(self.calendar)
self.frame.Add(self.vbox)
self.window.Add(self.frame)
self.prettyup()
self.window.ShowAll()
# Keep text viewable - size no smaller than intended.
self.window.AllowShrink = False
Gtk.Application.Run()
def getcaltext(self):
"""
Get messages for run date.
"""
# Calendar month is 0 based.
yearmonthday = self.calendar.Year, self.calendar.Month + 1, self.calendar.Day
chosendate = datetime.datetime(*yearmonthday)
self.caltext = INFOMSG.format(chosendate)
# For reporting of number of days since beginning of epoch.
timediff = chosendate - CalendarTest.inthebeginning
self.timedifftext = DATEDIFFMSG.format(timediff.days, chosendate)
def usemarkup(self):
"""
Refreshes UseMarkup property on widgets (labels)
so that they display properly and without
markup text.
"""
# Have to refresh this property each time.
self.frame.LabelWidget.UseMarkup = True
self.datelabel.UseMarkup = True
self.datedifflabel.UseMarkup = True
def prettyup(self):
"""
Get Gtk objects looking the way we
intended.
"""
# Try to make frame wider.
# XXX
# Works nicely on Windows - try on Unix.
# Allows bold, etc.
self.usemarkup()
self.frame.SetSizeRequest(WINDOWWIDTH, -1)
# Get rid of line in middle of text on title.
self.frame.Shadow = Gtk.ShadowType.None
# Try to get Courier New on calendar.
fd = Pango.FontDescription.FromString(CALENDARFONT)
self.calendar.ModifyFont(fd)
self.datelabel.Justify = Gtk.Justification.Left
self.datedifflabel.Justify = Gtk.Justification.Left
self.window.Title = ''
self.usemarkup()
def dateselect(self, widget, event):
self.getcaltext()
self.datelabel.Text = self.caltext
self.datedifflabel.Text = self.timedifftext
self.prettyup()
def DeleteEvent(self, widget, event):
Gtk.Application.Quit()
if __name__ == '__main__':
CalendarTest()
Thanks for stopping by.
http://pyright.blogspot.com/2014/10/mono-gtk-sharp-ironpython-calendarview.html
_DXGK_SETVIDPNSOURCEADDRESS_FLAGS structure
The DXGK_SETVIDPNSOURCEADDRESS_FLAGS structure identifies the specific type of operation to perform in a call to the display miniport driver's DxgkDdiSetVidPnSourceAddress or DxgkDdiSetVidPnSourceAddressWithMultiPlaneOverlay functions.
Syntax
typedef struct _DXGK_SETVIDPNSOURCEADDRESS_FLAGS {
  union {
    struct {
      UINT ModeChange : 1;
      UINT FlipImmediate : 1;
      UINT FlipOnNextVSync : 1;
      UINT FlipStereo : 1;
      UINT FlipStereoTemporaryMono : 1;
      UINT FlipStereoPreferRight : 1;
      UINT SharedPrimaryTransition : 1;
      UINT IndependentFlipExclusive : 1;
      UINT MoveFlip : 1;
#if ...
      UINT Reserved : 23;
#elif
      UINT Reserved : 24;
#elif
      UINT Reserved : 25;
#else
      UINT Reserved : 29;
#endif
    };
    UINT Value;
  };
} DXGK_SETVIDPNSOURCEADDRESS_FLAGS;
Remarks
If any of the FlipStereo, FlipStereoTemporaryMono, or FlipStereoPreferRight members are set, these conditions apply:
- The hAllocation member of the DXGKARG_SETVIDPNSOURCEADDRESS structure points to an allocation that is created with the Stereo member set in the Flags member of the D3DKMT_DISPLAYMODE structure.
- The PrimarySegment and PrimaryAddress members of DXGKARG_SETVIDPNSOURCEADDRESS point to the starting physical address of the allocation.
- The driver honors the settings of the FlipImmediate and FlipOnNextVSync members of the DXGK_SETVIDPNSOURCEADDRESS_FLAGS structure.
Requirements
See Also
DXGKARG_SETVIDPNSOURCEADDRESS
DXGK_SETVIDPNSOURCEADDRESS_FLAGS
DxgkDdiSetVidPnSourceAddress
DxgkDdiSetVidPnSourceAddressWithMultiPlaneOverlay
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/d3dkmddi/ns-d3dkmddi-_dxgk_setvidpnsourceaddress_flags
Yohei Onishi commented on AIRFLOW-2385:
---------------------------------------
Yes, this job runs a Spark job. I added an on_kill method that stops the Spark session. I will see
if it works. Any advice is highly welcome.
{code:java}
def execute(self, context):
    self.spark = self.create_spark_session()
    # run some queries

def create_spark_session(self):
    return SparkSession.builder. \
        appName(self.app_name). \
        config('spark.master', self.spark_master). \
        config('spark.executor.cores', self.spark_executor_cores). \
        config('spark.executor.memory', self.spark_executor_memory). \
        config('spark.executor.instances', self.spark_num_executors). \
        config('spark.yarn.queue', self.yarn_queue). \
        enableHiveSupport(). \
        getOrCreate()

def on_kill(self):
    if self.spark:
        self.spark.stop()
{code}
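Airflow enforces execution_timeout by raising an exception inside the running task (a SIGALRM-style mechanism in timeout.py); if the task blocks in external work such as a Spark job, the work keeps going unless on_kill cleans it up. A minimal plain-Python sketch of that pattern (not the Airflow API; names are illustrative):

```python
import signal
import time

class ExecutionTimeout(Exception):
    """Stand-in for Airflow's task-timeout exception."""

def _raise_timeout(signum, frame):
    raise ExecutionTimeout("Process timed out")

def run_with_timeout(task, seconds, on_kill=None):
    """Run task(); if it exceeds `seconds`, call on_kill() for cleanup
    and re-raise ExecutionTimeout."""
    signal.signal(signal.SIGALRM, _raise_timeout)
    signal.alarm(seconds)
    try:
        return task()
    except ExecutionTimeout:
        if on_kill is not None:
            on_kill()       # e.g. self.spark.stop()
        raise
    finally:
        signal.alarm(0)     # always cancel the pending alarm

killed = []
try:
    run_with_timeout(lambda: time.sleep(5), 1,
                     on_kill=lambda: killed.append(True))
except ExecutionTimeout:
    pass
print(killed)  # [True]
```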
> Airflow task is not stopped when execution timeout gets triggered
> -----------------------------------------------------------------
>
> Key: AIRFLOW-2385
> URL:
> Project: Apache Airflow
> Issue Type: Bug
> Components: DAG
> Affects Versions: 1.9.0
> Reporter: Yohei Onishi
> Priority: Major
>
> I have my own custom operator that extends BaseOperator as follows. I tried to kill a task
if the task runs for more than 30 minutes. The timeout seems to be triggered according to the log,
but the task still continued.
> Am I missing something? I checked the official document but do not know what is wrong.
> My operator is like as follows.
> {code:java}
> class MyOperator(BaseOperator):
> @apply_defaults
> def __init__(
> self,
> some_parameters_here,
> *args,
> **kwargs):
> super(MyOperator, self).__init__(*args, **kwargs)
> # some initialization here
> def execute(self, context):
> # some code here
> {code}
>
> My task is as follows.
> {code:java}
> t = MyOperator(
> task_id='task',
> dag=scheduled_dag,
> execution_timeout=timedelta(minutes=30)
> )
> {code}
>
> I found this error but the task continued.
> {code:java}
> [2018-04-12 03:30:28,353] {base_task_runner.py:98} INFO - Subtask: [Stage 6:==================================================(1380 + -160) / 1224][2018-04-12 03:30:28,353] {timeout.py:36} ERROR - Process timed out
> {code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
http://mail-archives.apache.org/mod_mbox/airflow-commits/201804.mbox/%3CJIRA.13155597.1524799845000.371616.1524808860319@Atlassian.JIRA%3E
A tour de force. It reminds me of a project I once worked on that was written entirely in 2.4, and one of their vendors updated their API, which was only available in 2.7.
I ended up using JSON to communicate between a 2.4 and a 2.7 process. Wanted to stick my finger down my throat afterwards. The whole system was a pile of kluges held together by klugey string. Yerch.
They are the abominations that put food on our tables. A twitter friend related having to programmatically move the mouse to the location of a button and then click it. IDEK.
"The Kludge that Fed Me" :-(
You might be able to use Python's urllib to download the files. I have also used Selenium to automate a browser and make it do what I want.
@Mike, thanks. I tried that among other things. They yield an "operation not permitted" message or error or text value.
For whatever reason, the site is locked down to force you to use a mouse and clicks and individually download each file. It's kind of a drag :-(
This makes it sound like there should be absolute URLs returned. And I found this useful link that might work for you:
Mike, thanks again. I could be wrong, but I think they've got this site locked down to force the user to use IE9 and a mouse.
I've tried IronPython with the Sharepoint.Client dll. The usual response to anything I try to do is along the lines of
"AttributeError: attribute 'SPSite' of 'namespace#' object is read-only"
I was working from this site
trying to get started on using the SharePoint client object model.
It could be I'm not understanding something or missing something. So far I've hit a lot of dead ends.
Thanks for the input and for having a look at the blog.
As it turns out, there is access to Explorer View; it's just in a different location in the newer version of SharePoint.
It's not a Python solution, but I can map the SharePoint site as a network drive letter and copy the files with
xcopy /*.msr . /z
I learned a little messing with this (I hate GUI's :-( )
Thanks a lot; a life saver for me; have tried autoit and could not download using it. Issue solved.
http://pyright.blogspot.com/2014/08/internet-explorer-9-save-dialog.html
Contents
1 World-Wide Web ........ 7
1.1 WorldWideWeb - Summary ........ 7
1.2 WWW people ........ 9
1.2.1 Eelco van Asperen ........ 10
1.2.2 Carl Barker ........ 10
1.2.3 Tim Berners-Lee ........ 10
1.2.4 Robert Cailliau ........ 10
1.2.5 Peter Dobberstein ........ 10
1.2.6 "Erwise" team ........ 11
1.2.7 David Foster ........ 11
1.2.8 Karin Gieselmann ........ 11
1.2.9 Jean-Francois Groff ........ 11
1.2.10 Willem von Leeuwen ........ 11
1.2.11 Nicola Pellow ........ 11
1.2.12 Bernd Pollermann ........ 12
1.2.13 Pei Wei ........ 12
1.3 Policy ........ 12
1.3.1 Aim ........ 12
1.3.2 Collaboration ........ 12
1.3.3 Code distribution ........ 13
1.3.4 WorldWideWeb distributed code ........ 13
1.3.5 Copyright CERN 1990-1992 ........ 15
1.4 History to date ........ 16
2 How can I help? ........ 19
2.1 Information Provider ........ 20
2.1.1 You have a few files ........ 20
2.1.2 You have a NeXT ........ 20
2.1.3 Using a shell script ........ 20
2.1.4 You have many files ........ 20
2.1.5 You have an existing information base ........ 20
2.2 Etiquette ........ 21
2.2.1 Sign it! ........ 21
2.2.2 Give the status of the information ........ 22
2.2.3 Refer back ........ 22
2.2.4 A root page for outsiders ........ 22
2.3 Things to be done ........ 22
2.3.1 Client side ........ 22
2.3.2 Server side ........ 23
2.3.3 Other ........ 23
3 Design Issues ........ 25
3.1 Intended Uses ........ 26
3.2 Availability on various platforms ........ 26
3.3 Navigational Techniques and Tools ........ 27
3.3.1 Defined structure ........ 27
3.3.2 Graphic Overview ........ 27
3.3.3 History mechanism ........ 27
3.3.4 Index ........ 28
3.3.5 Node Names ........ 29
3.3.6 Menu of links ........ 29
3.3.7 Design Issues ........ 29
3.3.8 Web of Indexes ........ 29
3.4 Tracing Links ........ 31
3.5 Versioning ........ 31
3.6 Multiuser considerations ........ 32
3.6.1 Annotation ........ 32
3.6.2 Protection ........ 32
3.6.3 Private overlaid web ........ 33
3.6.4 Locking and modifying ........ 33
3.6.5 Annotation ........ 33
3.7 Notification of new material ........ 34
3.8 Topology ........ 34
3.8.1 Are links two- or multi-ended? ........ 34
3.8.2 Should the links be monodirectional or bidirectional? ........ 34
3.8.3 Should anchors have more than one link? ........ 35
3.8.4 Should links be typed? ........ 35
3.8.5 Should links contain ancillary information? ........ 36
3.8.6 Should a link contain Preview information? ........ 36
3.9 Link Types ........ 36
3.9.1 Magic link types ........ 36
3.10 Document Naming ........ 37
3.10.1 Name or Address, or Identifier? ........ 38
3.10.2 Hints ........ 38
3.10.3 X500 ........ 39
3.11 Document formats ........ 39
3.11.1 Format negotiation ........ 39
3.11.2 Examples ........ 40
3.12 Design Issues ........ 41
3.13 Document caching ........ 41
3.13.1 Expiry date ........ 42
3.14 Scott Preece on retrieval ........ 42
3.15 Design Issues ........ 43
4 Relevant protocols ........ 45
4.1 File Transfer ........ 45
4.2 Network News ........ 45
4.3 Search and Retrieve ........ 46
4.4 ........ 46
4.5 HTTP as implemented in WWW ........ 46
4.5.1 Connection ........ 46
4.5.2 Request ........ 46
4.5.3 Response ........ 47
4.5.4 Disconnection ........ 47
4.6 HyperText Transfer Protocol ........ 48
4.6.1 Underlying protocol ........ 48
4.6.2 Idempotent ? ........ 48
4.6.3 Request: Information transferred from client ........ 49
4.6.4 Response ........ 51
4.6.5 Status codes ........ 51
4.6.6 Penalties ........ 52
4.7 Why a new protocol? ........ 53
5 W3 Naming Schemes ........ 55
5.1 Examples ........ 55
5.2 Naming sub-schemes ........ 56
5.3 Address for an index Search ........ 57
5.3.1 Example ........ 57
5.4 W3 addresses of files ........ 57
5.4.1 Examples ........ 58
5.4.2 Improvements: Directory access ........ 58
5.5 Hypertext address for net News ........ 58
5.5.1 Examples ........ 59
5.6 Relative naming ........ 59
5.7 HTTP Addressing ........ 60
5.8 Telnet addressing ........ 61
5.9 W3 address syntax: BNF ........ 62
5.10 Escaping illegal characters ........ 64
5.11 Gopher addressing ........ 64
5.12 W3 addresses for WAIS servers ........ 66
6 HTML ........ 67
6.1 Default text ........ 67
6.2 HTML Tags ........ 68
6.2.1 Title ........ 68
6.2.2 Next ID ........ 68
6.2.3 Base Address ........ 69
6.2.4 Anchors ........ 69
6.2.5 IsIndex ........ 70
6.2.6 Plaintext ........ 70
6.2.7 Example sections ........ 70
6.2.8 Paragraph ........ 71
6.2.9 Headings ........ 71
6.2.10 Address ........ 72
6.2.11 Highlighting ........ 72
6.2.12 Glossaries ........ 72
6.2.13 Lists ........ 72
6.3 SGML ........ 73
6.3.1 High level markup ........ 73
6.3.2 Syntax ........ 74
6.3.3 Tools ........ 74
6.3.4 AAP ........ 74
7 Coding Style Guide ........ 75
7.1 Language features ........ 75
7.2 Module Header ........ 76
7.3 Function Headings ........ 77
7.3.1 Format ........ 77
7.3.2 Entry and exit conditions ........ 78
7.3.3 Function Heading: dummy example ........ 78
7.4 Function body layout ........ 79
7.4.1 Indentation ........ 79
7.5 Identifiers ........ 79
7.6 Directory structure ........ 80
7.7 Include Files ........ 80
7.7.1 Module include files ........ 80
7.7.2 Common include files ........ 81
Chapter 1
World-Wide Web
Documentation on the World Wide Web is normally picked up by browsing with a W3 browser. If you need a printed copy, it is also available in "LaTeX" and "PostScript" formats by anonymous FTP from node info.cern.ch.
This document introduces the "World Wide Web Book", a paper document derived from the hypertext about the project. The book contains
- General information about the project, people and history;
- A list of things to be done, including how YOU can put data onto the web;
- A technical discussion of the design issues in projects such as WWW;
- Actual details of the implementation of the WWW project;
- Such low-level details as software architectures and coding standards.
The text of the book has been automatically generated from the hypertext,
so it may seem strange in places due to links in the hypertext which are
not there in the printed copy.
The authors of the material are general members of the W3 team at CERN, except where otherwise noted.
1.1 WorldWideWeb - Summary
The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system.
The project is based on the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams. Similar systems include the University of Graz's "Hyper-G" and Thinking Machine's "W.A.I.S." systems.
The WWW model gets over the frustrating incompatibilities of data format between suppliers and reader by allowing negotiation of format between a smart browser and a smart server. This should provide a basis for
You can try the simple line mode browser by telnetting to info.cern.ch with user name "www" (no password). You can also find out more about WWW in this way.
It is much more efficient to install the browser on your own machine. The line mode browser is currently available in source form by anonymous FTP from node info.cern.ch [currently 128.141.201.74] as
/pub/WWWLineMode_v.vv.tar.Z.
(v.vv is the version number - take the latest.) Also available is a hypertext editor for the NeXT using the NeXTStep graphical user interface in file
/pub/WWWNeXTStepEditor_v.vv.tar.Z
and a skeleton server daemon, available as
/pub/WWWDaemon_v.vv.tar.Z
Documentation is readable using www. A plain text version of the installation
instructions is included in the tar file.
Tim BL
1.2 WWW people
This is a list of some of those who have contributed to the WWW project, and whose work is linked into this web. Unless otherwise stated they are at CERN, Phone +41(22)767 plus the extension given below. Address: 1211 Geneva 23, Switzerland.
1.2.1 Eelco van Asperen
Ported the line-mode browser to the PC under PC-NFS; developed a curses version. Email: evas@cs.few.eur.nl.
1.2.2 Carl Barker
Carl is at CERN for a six month period during his degree course at Brunel University, UK. Carl will be working on the server side, possibly on client authentication. Tel: 8265. Email: barker@cernnext.cern.ch
1.2.3 Tim Berners-Lee
Currently in CN division. Before coming to CERN, Tim worked on, among other things, document production and text processing. Phone: 3755, Email: timbl@info.cern.ch
1.2.4 Robert Cailliau
Currently in ECP division, Programming Techniques group. Robert has been interested in document production since 1975. He ran the Office Computing Systems group from 87 to 89. He is a long-time user of Hypercard, which he used to such diverse ends as writing trip reports, games, bookkeeping software, and budget preparation forms. Robert is contributing browser software for the Macintosh platform, and will be analysing the needs of physics experiments for online data access. Phone: 5005(office), 4646 (Lab). Email: cailliau@cernvm.cern.ch
1.2.5 Peter Dobberstein
While at the DESY lab in Hamburg (DE), Peter did the port of the linemode browser onto MVS and, indirectly, VM/CMS. These were the most difficult of the ports to date. He also overcame many incidental problems in making a large amount of information in the DESY database available.
1.2.6 "Erwise" team
Kim Nyberg, Teemu Rantanen, Kati Suominen and Kari Sydänmaanlakka.
1.2.7 David Foster
With wide experience in networking, and a current conviction that information systems and PC/Windows are the way of the future, Dave is having a go at an MS-Windows browser/editor. Dave also has a strong interest in server technology and intelligent information retrieval algorithms.
1.2.8 Karin Gieselmann
With experience as librarian of the "FIND" database on cernvm, interfacing with authors and readers, Karin has volunteered to be a "hyperlibrarian" and look after the content of hypertext databases. Email: Karin@cernvm.cern.ch
1.2.9 Jean-Francois Groff
Provided some useful input in the "design issues". Currently in ECP/DS as "cooperant", J-F joined the project in September 1991. He wrote the gateway to the VMS Help system, and is looking at new browsers (X-Windows, emacs) and integration of new data sources. Jean-Francois is also working on the underlying browser architecture. Phone: 3755, Email: jfg@cernvax.cern.ch
1.2.10 Willem von Leeuwen
At NIKHEF, Willem put up many servers and has provided much useful feedback about the w3 browser code.
1.2.11 Nicola Pellow
Nicola joined the project in November 1990. She is a student at Leicester Polytechnic, UK, and left CERN at the end of August 1991. She wrote the original line mode browser .
1.2.12 Bernd Pollermann
Bernd is responsible for the "XFIND" indexes on the CERNVM node, for their operation and, largely, their contents. He is also the editor of the Computer Newsletter (CNL), and has experience in managing large databases of information. Bernd is in the AS group of CN division. He has contributed code for the FIND server which allows hypertext access to this large store of information. Phone: 2407 Office: 513-1-16
1.2.13 Pei Wei
Pei is the author of "Viola", a hypertext browser, and the ViolaWWW variant which is a WWW browser. He is at the University of California.
1.3 Policy
This outlines the policy of the W3 project at CERN.
1.3.1 Aim

A very wide range of information of all types should be available as widely as possible.
1.3.2 Collaboration
We encourage collaboration by academic or commercial parties. There are always many things to be done: ports to be made to different environments, new browsers to be written, and additional data to be incorporated into the "web". We have already been fortunate enough to have several contributions in these terms, and also hardware support from manufacturers. If you may be interested in extending the web or the software, please mail or phone us.
1.3.3 Code distribution
Code written at CERN is covered by the CERN copyright. In practice the interpretation of this in the case of the W3 project is that the programs are freely available to academic bodies. To commercial organizations who are not reselling it, but are using it to participate in global information exchange, the charge is generally waived in order to cut administrative costs. Code is of course shared freely with all collaborators. Commercial organizations wishing to sell software based on W3 code should contact CERN.
Where CERN code is based on public domain code, that code is also public domain.
Code not originating at CERN is of course covered by terms set by the copyright holder involved.
Tim BL
1.3.4 WorldWideWeb distributed code
See the CERN copyright. This is the README file which you get when you unwrap one of our tar files. These files contain information about hypertext, hypertext systems, and the WorldWideWeb project. If you have taken this with a .tar file, you will have only a subset of the files.
Archive Directory structure
Under /pub/www, besides this README file, you'll find bin, src and doc directories. The main archives are as follows:

src/WWWLineMode v.vv.tar.Z    The line mode browser - all source, and binaries for selected systems.

WWWLineModeDefaults.tar.Z     A subset of WWWLineMode v.vv.tar.Z: basic documentation, and our current home page.

src/WWWNextStepEditor.tar.Z   The hypertext browser/editor for the NeXT - source and binary.

src/WWWDaemon v.vv.tar.Z      The HTTP daemon, and WWW-WAIS gateway programs.

doc/WWWBook.tar.Z             A snapshot of our internal documentation - we prefer you to access this on line; see warnings below.

bin/xxx/www                   Executable binaries for system xxx.
Generated Directory structure
The tar files are all designed to be unwrapped in the same (this) directory. They create different parts of a common directory tree under that directory. There may be some duplication. They also generate a few files in this directory: README.*, Copyright.*, and some installation instructions (.txt).
NeXTStep Browser/Editor
The browser for the NeXT comprises the files contained in the application directory WWW/Next/Implementation/WorldWideWeb.app, and is compiled. When you install the app, you may want to configure the default page, WorldWideWeb.app/default.html. This must point to some useful information! You should keep it up to date with pointers to info on your site and elsewhere. If you use the CERN home page, note there is a link at the bottom to the master copy on our server.
Line Mode browser
Binaries of this for some systems are in subdirectories of /pub/www/bin. If the binary exists for your system, take that and also the /pub/www/WWWLineModeDefaults.tar.Z. Unwrap the documentation, and put (link) its directory into /usr/local/lib/WWW on your machine. Put the www executable into your path somewhere, and away you go.
If no binary exists, proceed as follows. Take the source tar file WWWLineMode v.vv.tar.Z, uncompress and untar it. You will then find the Line Mode browser in WWW/LineMode/Implementation/... (See Installation notes.)
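The unwrap step can be sketched concretely. In this illustration the archive itself is fabricated on the spot so the example is self-contained; a real release would be fetched by anonymous FTP, and the file and source names here (WWWLineMode.tar, HTBrowse.c) stand in for whatever the current release contains:

```shell
# Self-contained sketch of unwrapping a distribution tar file.
# The archive is fabricated here purely for illustration.
set -e
work=/tmp/www_unwrap_demo
rm -rf "$work"
mkdir -p "$work/release/WWW/LineMode/Implementation"
echo "/* browser source */" > "$work/release/WWW/LineMode/Implementation/HTBrowse.c"
(cd "$work/release" && tar cf ../WWWLineMode.tar WWW)

# The tar files are designed to be unwrapped in the distribution
# directory, each creating its part of the common WWW/ tree:
cd "$work"
tar xf WWWLineMode.tar
ls WWW/LineMode/Implementation    # lists HTBrowse.c
```

Unwrapping several archives in the same directory merges their contents into the one WWW/ tree, as described above.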
Subdirectories to that directory contain Makefiles for systems to which we have already ported. If your system is not among them, make a new subdirectory with the system name, and copy the Makefile from an existing one. Change the directory names as needed. PLEASE INFORM US OF THE CHANGES WHEN YOU HAVE DONE THE PORT. This is a condition of your use of this code, and will save others repeating your work, and save you repeating it in future releases.
When you install the browsers, you may want to configure the default page. This is /usr/local/lib/WWW/default.html for the line mode browser. This must point to some useful information! You should keep it up to date with pointers to info on your site and elsewhere. If you use the CERN home page, note there is a link at the bottom to the master copy on our server.
Some basic documentation on the browser is delivered with the home page in the directory WWW/LineMode/Defaults. A separate tar file of that
directory (WWWLineModeDefaults.tar.Z) is available if you just want to update that.
The rest of the documentation is in hypertext, and so will be readable most easily with a browser. We suggest that after installing the browser, you browse through the basic documentation so that you are aware of the options and customisation possibilities, for example.
Documentation
The archive /pub/www/doc/WWWBook.tar.Z is an extract of the text from the WorldWideWeb (WWW) project documentation.
This is a snapshot of a changing hypertext system. The text is provided as example hypertext only, not for general distribution. The accuracy of any information is not guaranteed, and no responsibility will be accepted by the authors for any loss or damage due to inaccuracy or omission. A copy of the documentation is inevitably out of date, and may be inconsistent. There are links to information which is not provided in that tar file. If any of these facts cause a problem, you should access the original master data over the network using www, or mail us.
Servers
The Daemon tar file contains (in this release) the code for the basic HTTP daemon for serving files, and also for the WWW-WAIS gateway. To compile the WAIS gateway, you will need [a link to] a WAIS distribution at the same level as the WWW directory.
General
Your comments will of course be most appreciated, on code, or on information on the web which is out of date or misleading. If you write your own hypertext and make it available by anonymous ftp or using a server, tell us and we'll put some pointers to it in ours. Thus spreads the web...

Tim Berners-Lee
WorldWideWeb project
CERN, 1211 Geneva 23, Switzerland
Tel: +41 22 767 3755; Fax: +41 22 767 7155; email: timbl@info.cern.ch
1.3.5 Copyright CERN 1990-1992
The information (of all forms) in these directories is the intellectual property of the European Particle Physics Laboratory (known as CERN). It
is freely available for non-commercial use in collaborating non-military academic institutes. Commercial organisations wishing to use this code should apply to CERN for conditions. Any modifications, improvements or extensions made to this code, or ports to other systems, must be made available under the same terms.
No guarantee whatsoever is provided by CERN. No liability whatsoever is accepted for any loss or damage of any kind resulting from any defect or inaccuracy in this information or code.
Tim Berners-Lee
CERN
1211 Geneva 23, Switzerland
Tel: +41 (22) 767 3755, Fax: +41 (22) 767 7155, Email: tbl@cernvax.cern.ch
Tim BL
1.4 History to date
A few steps to date in the WWW project history are as follows:
March 1989       First project proposal written and circulated for comment (TBL). Paper "HyperText and CERN" (in ASCII or WriteNow format) produced as background.

October 1990     Project proposal reformulated with encouragement from CN and ECP divisional management. RC is co-author.

November 1990    Initial WorldWideWeb prototype developed on the NeXT (TBL).

November 1990    Nicola Pellow joins and starts work on the line mode browser. Bernd Pollermann helps get interface to CERNVM "FIND" index running. TBL gives a colloquium on hypertext in general.

Christmas 1990   Line mode and NeXTStep browsers demonstrable. Access is possible to hypertext files, CERNVM "FIND", and internet news articles.

February 1991    Workplan for the purposes of ECP division.

26 February 1991 Presentation of the project to the ECP group.

March 1991       Line mode browser (www) released to limited audience on priam vax, rs6000, sun4.

May 1991         Workplan produced for CN/AS group.
17 May 1991      Presentation to C5 committee. General release of www on central CERN machines.

12 June 1991     CERN Computer Seminar on WWW.

August 1991      Files available on the net, posted on alt.hypertext (6, 16, 19th Aug), comp.sys.next (20th), comp.text.sgml and comp.mail.multi-media (22nd).

October 1991     VMS/HELP and WAIS gateways installed. The www-interest and www-talk@info.cern.ch mailing lists started. One year status report. Anonymous telnet service started.

December 1991    Presented poster and demonstration at HT91. W3 browser installed on VM/CMS. CERN computer newsletter announces W3 to the HEP world.

15 January 1992  Line mode browser release 1.1 available by anonymous FTP. See news. Presentation to AIHEP'92 at La Londe.

12 February 1992 Line mode v 1.2 announced on alt.hypertext, comp.infosystems, comp.mail.multi-media, cern.sting, comp.archives.admin, and mailing lists.
Chapter 2
How can I help?
There are lots of ways you can help if you are interested in seeing the web grow and be even more useful...
Put up some data           There are many ways of doing this. The web needs both raw data - fresh hypertext or old plain text files - and smart servers giving views of existing databases. See more details, etiquette.

Suggest someone else does  Maybe you know a system which it would be neat to have on the web. How about suggesting to the person involved that they put up a W3 server?

Manage a subject area      If you know something of what's going on in a particular field, organization or country, would you like to keep up-to-date an overview of online data?

Write some software        We have a big list of things to be done. Help yourself - all contributions gratefully received! See the list.

Send us suggestions        We love to get mail... www-bug@info.cern.ch

Tell your friends          Install (or get installed) the client software on your site. Quote things by their UDI to allow w3 users to pick them straight up.
Tim BL
2.1 Information Provider
There are many ways of making your new or existing data available on the "web". The best method depends on what sort of data you have. (If you have any questions, mail the www team at www-bug@info.cern.ch.) See also: Web etiquette, How can I help?
2.1.1 You have a few files
If you have some plain text files then you can easily write a small hypertext file which points to them. To make them accessible you can use either anonymous FTP, or the HTTP daemon.
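Such a pointer file is only a few lines of HTML. The file and document names in this sketch are invented for illustration:

```html
<TITLE>Group reports</TITLE>
<H1>Reports</H1>
Plain text files available from this directory:
<UL>
<LI><A HREF="minutes-jan.txt">Minutes, January meeting</A>
<LI><A HREF="minutes-feb.txt">Minutes, February meeting</A>
</UL>
```

Each anchor points at one of your existing files; a reader following a link sees the plain text as it stands.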
2.1.2 You have a NeXT
You can use our prototype hypertext editor to create a web of hypertext, linking it to existing files. This is not YET available for X11 workstations. This is a fast way of making online documentation, as well as performing the hyper-librarian job of making sure all your information can be found.
2.1.3 Using a shell script
An HTTP daemon is such a simple thing that a simple shell script will often suffice. This is great for bits of information available locally through other programs, which you would like to publish. More details ...
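As a rough sketch of how little is needed: a responder in the spirit of the protocol as it then was reads one "GET /path" request line and writes the document back. The names serve_request and DOCROOT here are invented for the example, and a real daemon would be run under inetd and handle errors more carefully:

```shell
# A sketch only: read one "GET /path" request line from standard input
# and write the named document back. DOCROOT and serve_request are
# illustrative names, not part of any distribution.
DOCROOT=${DOCROOT:-/tmp/wwwdemo}

serve_request() {
    read method path junk
    case "$path" in
        *..*) echo "Bad request"; return 1 ;;   # refuse directory escapes
    esac
    if [ "$method" = "GET" ] && [ -f "$DOCROOT$path" ]; then
        cat "$DOCROOT$path"
    else
        echo "<TITLE>Not found</TITLE>Document not found."
    fi
}

# Demonstration: publish one document and serve a request for it.
mkdir -p "$DOCROOT"
echo "<TITLE>Test</TITLE>Hello from the shell server." > "$DOCROOT/test.html"
echo "GET /test.html" | serve_request    # -> <TITLE>Test</TITLE>Hello from the shell server.
```

The same shape works for "virtual" documents: instead of cat, the script runs any local program and prints its output.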
2.1.4 You have many files
In this case, for speed of access, the HTTP daemon will probably be best. You can write a tree of hypertext in HTML linking the text files, or you can even generate the tree automatically from your directory tree. If you want to generate a full-text index, then you could use the public domain WAIS software - your data will then be accessible (as plain text, not hypertext) through the WAIS gateway .
2.1.5 You have an existing information base
If you have a maintained base of information, don't rush into changing the way you manage it. A "gateway" W3 server can run on top of your existing system, making the information in it available to the world. This is how it works:
- Menus map onto sets of hypertext links.
- Different search options map onto different "index" document addresses (even if they use the same index underneath in your system).
- Procedures used by those who contribute and manage information stay unaltered.
If your database is WAIS, VMS/HELP, XFIND, or Hyper-G, a gateway
exists already. These gateway servers did not take long to write. You can
pick up a skeleton server in C from our distribution . You can also write
one from scratch, for example in perl. An advantage of a gateway is that
you can maintain your existing procedures for creating text and managing
the database. See: Tips for server writers.
Tim BL
2.2 Etiquette
There are a few conventions which will make for a more usable, less confusing, web.
2.2.1 Sign it!
An important aspect of information which helps keep it up to date is that one can trace its author. Doing this with hypertext is easy: all you have to do is put a link to a page about the author (or simply to the author's phone book entry).

Make a page for yourself with your mail address and phone number. At the bottom of files for which you are responsible, put a small note - say, just your initials - and link it to that page. The address style (right justified) is useful for this.

Your author page is also a convenient place to put any disclaimers, copyright notices, etc. which law or convention require. It saves cluttering up the messages themselves with a long signature.
If you are using the NeXT hypertext editor, then you can put this link from your default blank page so that it turns up on the bottom of each new document.
2.2.2 Give the status of the information
How complete is the information? What is its scope? For a phone book for example, what set of people are in it?
2.2.3 Refer back
You may create some data as part of an information tree, but others may make links to it from other places. Don't make assumptions about what people have just read. Make links from your data back to more general data, so that if people have jumped in there, and at first they don't understand what it's all about, they can pick up some background information to get the context.
2.2.4 A root page for outsiders

I suggest you put a "map" line into your daemon rule file to map the document name "/" onto such a document. As well as a summary of what is available at your host, pointers to related hosts are a good idea.

Tim BL
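The "map" line just mentioned might look like the fragment below. This is a sketch only: the exact rule syntax can differ between daemon releases, and the file names are invented for illustration:

```
# Map requests for the document name "/" onto an overview page,
# and pass everything else through to the hypertext tree.
map   /       /pub/www/Overview.html
pass  /pub/www/*
```

With a rule like this, an outsider who asks your host for "/" lands on your overview page rather than an arbitrary file.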
2.3 Things to be done
There are many of these... if you have a moment, take your pick! There are also special lists of things to do for the line mode browser, the NeXT browser, and the daemon.
2.3.1 Client side
More clients         Clients exist for many platforms, but not all. Editors only exist on the NeXT, but will be really useful for sourcing info and group work. (Group editor?)

Search engines       Now the web of data and indexes exists, some really smart intelligent algorithms ("knowbots?") could run on it. Recursive index and link tracing... Just think...

Text from hypertext  We need a quick way to print a book from the web. (html to tex?)
2.3.2 Server side
Server upgrade               Easier to install, port. Export directories as hypertext. Run shell scripts embedded in the directory for virtual documents and searches.

More servers                 See the list of things we have thought of or been pointed to.

WAIS integration             WAIS protocol extensions to allow hypertext; HTML data type, docids to be conforming UDIs. Integrate WAIS in the client too.

Integrate client and server  A client which generates HTML becomes a general purpose gateway. Especially useful for sites where general access to news, external internet, etc., is limited.
2.3.3 Other
Mail server        Update listserv to supply www documents from UDI, including at the bottom a list of references with their UDIs.

Gateways           JANET and DECnet for example. Real need.

HTTP enhancements  Format conversion, authorization, better logging information for statistics.
Tim BL
Chapter 3
Design Issues
This lists decisions to be made in the design or selection of a hypermedia
information system. It assumes familiarity with the concept of hypertext.
A summary of the uses of hypertext systems is followed by a list of features
which may or may not be available. Some of the points appear in the
Comms ACM July 88 articles on various hypertext systems. Some points
were discussed also at ECHT90 . Tentative answers to some design decisions
from the CERN perspective are included.
Here are the criteria and features to be considered:
- Intended uses of the system.
- Availability on which platforms?
- Navigational techniques and tools: browsing, indexing, maps, etc.
- Keeping track of previous versions of nodes and their relationships.
- Multiuser access: protection, editing and locking, annotation.
- Notifying readers of new material available.
- The topology of the web of links.
- The types of links which can express different relationships between nodes.

These are the three important issues which require agreement between systems which can work together:

- Naming and addressing of documents.
- Protocols.
- The format in which node content is stored and transferred.

Also to be considered:

- Implementation and optimisation: caching, smart browsers, knowbots etc., format conversion, gateways.
3.1 Intended Uses
Here are some of the many areas in which hypertext is used. Each area has its specific requirements in the way of features required.
- General reference data - encyclopaedia, etc.
- Completely centralized publishing - online help, documentation, tutorials, etc.
- More or less centralized dissemination of news which has a limited life.
- Collaborative authoring.
- Collaborative design of something other than the hypertext itself.
- Personal notebook.
The CERN requirement has a mixture of many of these uses, except that there is not a requirement for distribution of fixed hypertext on hard media such as optical disk. Evidently, the system will have to be networked, though databases may start life at least as personal notebooks. The (paper) document "HyperText and CERN" describes the problem to be solved at CERN, and the requirements of a system which solves them.
3.2 Availability on various platforms
The system is to be available (at CERN) on many sorts of machine, but priorities must be decided. A list comprises:
- A unix or VMS workstation with X-windows.
- An 80 character terminal attached to a unix or VMS machine, or an MS-DOS PC.
- An 80 character terminal attached to an IBM mainframe running VM/CMS.
- A Macintosh.
- A unix workstation with NeXTStep.
- An MS-DOS PC with graphics.
3.3 Navigational Techniques and Tools
There are a number of ways of accessing the data one is looking for. Navigational access (i.e., following links) is the essence of hypertext, but this can be enhanced with a number of facilities to make life more efficient and less confusing. (TBL)
3.3.1 Defined structure
It is sometimes nice for a reader to be able to reference a document structure built specifically by the document author to enhance his understanding. This is especially important when the structure is part of the information the author wishes to convey.

See a separate discussion of this point.
3.3.2 Graphic Overview
A graphic overview is useful and could be built automatically. Should it be made by the author, server, browser or an independent daemon?
3.3.3 History mechanism
This allows users to retrace their steps. Typical functions provided can be interpreted in a hypertext web as follows:
Home      Go to the initial node.

Back      Go to the node visited before this one in chronological order. Modify the history to remove the current node.

Next      When the current node is one of several nodes linked to the back node, go to the next of those nodes. Leave the back node unchanged. Modify the history to remove the current node and replace it with the "next" (new current) node.

Previous  When the current node is one of several nodes linked to the back node, go to the preceding one of those nodes.
In many hypertext systems, a tree structure is forcibly imposed on the data, and these functions are interpreted only with respect to the links in the tree. However, the reader as he browses defines a tree, and it may be more relevant to him to use that tree as a basis for these functions. I would therefore suggest that an explicit tree structure not be enforced.
3.3.4 Index
An index helps new readers of a large database quickly find an obscure node. Keyword schemes I include in the general topic of indexes. The index must, like a graphic overview, be built either by the author, or automatically by one of the server, browser, or a daemon. The index entries may be taken from the titles, a keyword list, or the node content, or a combination of these. Note that keywords, if they are specifically created rather than random words, map onto hypertext concept nodes, or nodes of special type keyword. It is interesting to establish an identity relationship between keywords in two different databases: this may lead a searcher from one database into another.
Index schemes are important but indexes or keywords should look like normal hypertext nodes. The particular special operation one can do with a good keyword index system which one can't do with a normal hypertext
See also: HyperText and Information Retrieval
3.3.5 Node Names
These allow faster access if one knows the name. They allow people to give references to hypertext nodes in other documents, over the telephone, etc. This is very useful. However, in Notecards, where the naming of nodes was enforced, it was found that thinking up names for nodes was a bore for users. KMS thought that being able to jump to a named node was important. The node name allows a command line interface to be used to add new nodes.
I think that naming a node should be optional: perhaps by default the system could provide a number which can be used instead of a name. The system should certainly support the naming of nodes, and access by name.
3.3.6 Menu of links
Regular linkwise navigation may be done with hotspots (highlighted anchors) or may be done with a menu. It may be useful to have a menu of all the links from a given node as an alternative way of navigating. Enquire, for example, offers a menu of references as the only way of navigating.
3.3.7 Web of Indexes
In WWW , an index is a document like any other. An index may be built to cover a certain domain of information. For example, at CERN there is a CERN computer center document index . There is a separate functional telephone book index . Indexes may be built by the original information provider, or by a third party as a value-added service.
Indexes may point to other indexes. An index search on one index may turn up another index in the result hit list. In this case, the following algorithm seems appropriate.
Index context
Most index searches nowadays, though some look like intelligent semantically aware searches, are basically associative keyword searches. That is, a document matches a search if there is a large correlation (with or without boolean operations) between the set of words it or its abstract contains and the set of words specified in the search. Let us consider extending these searches to linked indexes.
Context narrowing
Suppose we search a general physics index with the keywords "CERN NEWSLETTER". That index may contain an entry with keyword "CERN" pointing to the CERN index. Therefore, a search on the first index will turn up the CERN index. We should then search the CERN index, but looking only for the keyword "NEWSLETTER". The keyword "CERN" is discarded, as it is assumed by the new context. In this simple model, we can assume that the context words could be used directly as the keywords for the index itself.
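The narrowing step itself is tiny. In a shell-script sketch (narrow is an invented helper, not part of any distribution), it is simply the removal from the search of every keyword already assumed by the new index's context:

```shell
# narrow "CONTEXT WORDS" SEARCH WORDS...
# Print the search keywords NOT already implied by the context,
# i.e. the residue to pass on to the more specific index.
narrow() {
    context=$1; shift
    for word in "$@"; do
        case " $context " in
            *" $word "*) ;;              # assumed by the new context: drop it
            *) printf '%s\n' "$word" ;;  # still needed: keep it
        esac
    done
}

# Searching a general physics index for "CERN NEWSLETTER" turns up the
# CERN index (context "CERN"); only the residue is searched there:
narrow "CERN" CERN NEWSLETTER    # -> NEWSLETTER
```

If the residue is empty, the sub-index's context already covers the whole request and the search there is unconstrained.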
Context Broadening
We have discussed here only a narrowing of context, not a broadening.
One can imagine also a reference to a broader context index. In this case,
perhaps one should add to the search some keywords which come from the
original context but were not expressed. This would be dangerous, and
people would not like it as they often feel that they are expressing their
request in absolute terms even when they are not. Also, they may have
been trying to escape from too restricting a context.
One should also consider a search which traces hypertext links as well as using indexes.
See also: Navigational techniques , Hypertext and IR ,
Tim BL
3.4 Tracing Links
A form of search in a hypertext base involves tracing the links between given nodes. For example, to find a module suitable for connecting a DECstation to SCSI, one might try finding paths between a document on DECstations and a document on SCSI. This is similar to relevance feedback in index searching.
Tracing is made more powerful by using typed links . In that case, one
could perform semantic searches for all documents written by people who
were part of the same organisation as the author of this one, for example.
This can use node typing as well.
When using link tracing, documents take over from keywords. See Scott Preece's vision .
Tim BL
3.5 Versioning
Definition: The storage and management of previous copies of a piece of
information, for security, diagnostics, and interest.
Do you want version control?
Can you reference a version only?
If you refer to a particular place in a node, how does one follow it in a new version, if that place ceases to exist?
(Peter Aiken is the expert in this area - Tim Oren, Apple)
Yes, at CERN we will want versioning. Very often one wants to correct a news item, even one of limited life, without reissuing it. This is a problem with VAX/NOTES for example. I would suggest that the text for the
3.6 Multiuser considerations.
3.6.1 Annotation
Annotation is the linking of a new commentary node to someone else's existing node. It is the essence of a collaborative hypertext. An annotation does not modify the text necessarily: one can separate protection against writing and annotation.
3.6.2 Protection
Protection against unauthorized reading and writing is provided by servers. We use the word domain.
3.6.3 Private overlaid web
3.6.4 Locking and modifying
- One can write-protect the file temporarily. This unfortunately leaves no clue as to who has locked it, when and why. It is also indistinguishable from genuine protection of a document which should not be modified.
- One can create a lock file containing information about who/when/why, whose name is derived from the name of the file in question.
3.7 Notification of new material
Does one need to bring it to a reader's attention when new unread material is added?
- Asynchronously (e.g. by mail) when the update is made?
- Synchronously when he browses or starts the application?
- Under the control of the modifying author? (i.e. can I say whether my change is a notifiable change? Yes.)
How do you express interest - in a domain, in a node, in things near a node, in anything you have read already, etc.? A separate web which is stored locally, and logically overlays the public web?
3.8 Topology
Here are a few questions about the underlying connectivity of a hypertext web.
3.8.1 Are links two- or multi-ended?
The term "link" normally indicates a connection with two ends. Variations of this are links with multiple sources and/or multiple destinations, and constructs which relate more than two anchors. The latter map onto logic description systems, predicate calculus, etc. (See the "Aquanet" system from Xerox PARC - paper at HT91.) This is a natural step from hypertext in which the links are typed with semantic content. For example, the relation "Document A is a basis for document B given argument C". From now on, however, let us restrict ourselves to links in the conventional sense, that is, with two ends.
3.8.2 Should the links be monodirectional or bidirectional?
If they are bidirectional, a link always exists in the reverse direction. A disadvantage of this being enforced is that it might constrain the author of
This
is important when a critical parameter of the system is how long it takes
someone to create a link.
KMS and hypercard have one-way links; Enquire has two-way links. There is a question of how one can make a two-way link to a protected database. The automatic addition of the reverse link is very useful for enhancing the information content of the database. See also: Private overlaid web , Generic Links .
It may be useful to have bidirectional links from the point of view of managing data. For example: if a document is destroyed or moved, one is aware of what dangling links will be created, and can possibly fix them.
A compromise is that links be one-way in the data model, but that a reverse link is created whenever any link is made, so long as this can be done without infringing protection. An alternative is for the reverse links to be gathered by a background process operating on a basically monodirectionally linked web.
3.8.3 Should anchors have more than one link?
There is a design issue in whether one anchor may lead to many links, and/or one link may have many anchors. It seems reasonable for many anchors to lead to the same reference. If one source anchor leads to more than one destination anchor, then there will be ambiguity if the anchor is clicked on with a mouse. This could be resolved by providing a menu to the user, but I feel this would complicate it too much. I therefore suggest a many-to-one mapping. JFG disagrees and would like to see a small menu presented to the user if the link was ambiguous. Microcosm does this.
3.8.4 Should links be typed?
A typed link carries some semantic information, which allows the system to manage data more efficiently on behalf of the user. A default type ("untyped") normally exists in some form when types are implemented. See also a list of some types. (Should a link be allowed to have many types? (JFG) I don't think so: that should be represented by more than one link. (TBL))
Link typing helps with the generation of graphical overviews , and with automatic tracing .
3.8.5 Should links contain ancillary information?
Does the system allow dating, versioning, authorship, comment text on a link? If so, how is it displayed and accessed? This sort of information complicates the issue, in that readable information is no longer carried within node contents only. Pretty soon, following this path leads to a link becoming a node in itself, annotatable and all. This perverts the data model significantly, and I cannot see that that is a good idea. Information about the link can always be put in the source node, or in an intermediate node, for example an annotation. However, this makes tracing more difficult. It is certainly nice to be able to put a comment on a link. Perhaps one should make a link annotatable. I think not.
3.8.6 Should a link contain Preview information?
This is information stored at the source to allow the reader to check whether he wants to follow a link before he goes. I feel that the system may cache some data (such as the target node title), or the writer of the node may include some descriptive material in the highlighted spot, but it is not necessary to include preview information just because access may be slow. Caching should be done instead of corrupting the user interface. If you have a fast graphic overview, this could
3.9 Link Types
See discussion of whether links should be typed .
Descriptive (normal) link types are mainly for the benefit of users and
tracing, and graphics representation algorithms. Some link types for example
express relationships between the things described by two nodes.
A Is part of B / B includes A
A Made B / B is made by A
A Uses B / B is used by A
A refers to B / B is referred to by A
3.9.1 Magic link types
These have a significance known to the system, and may be treated in special ways. Many of these relate whole nodes, rather than particular
anchors within them. (See also multiended links and predicate logic) They might include:
Annotation
The information in the destination node is additional to that in the source
node, and may be viewed at the same time. It may be filtered out (as a
function of author?).
Annotation is used by one person to write the equivalent of "margin notes" or other criticism on another's document, for example. Tracing may ignore annotations when generating trees or sequences.
Embedded information
If this link is followed, the node at the end of it is embedded into the
display of the source node. This is supported by Guide, but not many
other systems. It is used, in effect, by those systems (VAX/notes under
Decwindows, Microsoft Word) which allow "Outlining": expanding a tree
bit by bit.
The browser has a more difficult job to do if this is supported.
person described by node A is author of node B
This information can be used for protection, and informing authors of interest, for sending mail to authors, etc.
person described by node A is interested in node B
This information can be used for informing readers of changes.
Node A is in fact a previous version of node B
Node A is in fact a set of differences between B and its previous
version. This information will probably not be stored as nodes, but
3.10 Document Naming
This is probably the most crucial aspect of design and standardization in an open hypertext system. It concerns the syntax of a name by which a document or part of a document (an anchor) is referenced from anywhere else in the world.
3.10.1 Name or Address, or Identifier?
Conventionally, a "name" has tended to mean a logical way of referring to an object in some abstract name space, while the term "address" has been used for something which specifies the physical location. The term "unique identifier" generally referred to a name which was guaranteed to be unique but had little significance as regards the logical name or physical address. A name server was used to convert names or unique identifiers into addresses.
3.10.2 Hints
Some document reference formats contain "hints" to the reader about the document, such as server availability, copyright status, last known physical address and data formats. It is very important not to confuse these with the document's name, as they have a shorter lifetime than the document.
3.10.3 X500
The X500 directory service protocol defines an abstract name space which is hierarchical. It allows objects such as organizations, people, and documents to be arranged in a tree. Whereas the hierarchical structure might make it difficult to decide in which of two locations to put an object (it's not hypertext), this does allow a unique name to be given for anything in the tree. X500 functionally seems to meet the needs of the logical name space in a wide-area hypertext system. Implementations are somewhat rare at the moment of writing, so it cannot be assumed as a general infrastructure. If this direction is chosen for naming, it still leaves open the question of the format of the address into which a document name will be translated. This must also be left as open-ended as the set of protocols. Tim BL
3.11 Document formats
The question of the format of the contents of a node is independent of the format of all the management information (except for the format of the anchor position within the node content). Therefore, the hypertext system can be largely defined without specifying the node format. However, agreement must be reached between client and server about how they exchange content information. Many hypertext systems qualify as hypermedia systems because they handle media other than plain text. Examples are graphics, video and sound clips, object-oriented graphics definitions, marked-up text, etc.
3.11.1 Format negotiation
Most hypermedia systems on the market today have the same application program responsible for the hypertext navigation and for the browsing. It would be safer to separate these features as much as possible: otherwise, in defining a universal hypertext system, one is burdened with defining a universal multimedia browser. This would certainly not stand the test of time. Node content must be left free to evolve. This implies that format conversion facilities must be available to allow simple browsers to access data which is stored in a sophisticated format. Such conversion facilities tend to exist in many applications, though not, in general, in hypertext applications.
The format of the content of a node should be as flexible as possible. Having more than one format is not useful from the user's point of view, only from the point of view of an evolving system. I suggest the following rules:
1. Basic formats
There is a set of formats which every client must be able to handle. These include 80-column text and basic hypertext ( HTML ).
2. Conversion
A server providing a format which is not in the basic set of formats required for a client must have the possibility of generating some sort of conversion of the text (even if necessary an apology for non-conversion in the case of graphics to text) for a client which cannot handle it. This ensures universal readability world over.
3. Negotiation
For every format, there must be a set of other possible formats which the server can convert it into, and the most desirable format is selected by negotiation between the two parties. The negotiation must take into account:
• the expected translation time, including current load factors
• the expected data degradation
• the expected transmission time (?!!)
The times one could assume will be roughly proportional to the length of the document, or at least linear in it.
3.11.2 Examples
Examples of rich text formats which exist already at CERN are as follows, with, in brackets after each, other formats into which it might be convertible:
• SGML (TeX, Postscript, plain text)
• Bookmaster (Postscript, IBM 3812, plain text)
• TeX (DVI, plain text)
• DVI
• Microsoft RTF (Postscript,
• Postscript, Editable Postscript (IBM 3812 bitmap)
• plain text
When a server (or browser) is obliged to perform a conversion from one format to another, one imagines that the result would be cached so that, if the same conversion were needed later, it would be available more rapidly. Format conversion, like notification of new material, is something which can be triggered either by the writer or by the browser. In many cases, a conversion from, say, SGML into Postscript or plain text would be made immediately on entry of the new material, and kept until the source has been updated (See caching , design issues
3.12 Document caching
Three operations in the retrieval of a document may take significant time:
• Format conversion by the server, including version regeneration
• Data transmission across the network
• Format conversion by the browser
At each stage, the server (in the first case) or browser (in the other cases) may decide to keep a temporary copy of the result. This copy should ideally be common to many browsers.
Automatic
• expiry date
• file size
• time taken to get the file
• frequency of access
• time since access
3.12.1 Expiry date
As a guide to help a cache program optimise the data it caches, it is useful if a document is transmitted with an estimate by the server of the length of time the data may be kept. This allows fast-changing documents to be flushed from the system, preventing readers from being misled. (I would not propose any notification of document changes to be distributed to cache managers automatically.) For example, an RFC may be cached for years, while the state of the alarm system may be marked as valid for only one minute.
Window-oriented browsers effectively cache documents when they keep
several at a time in memory, in different windows. In this case, for very
volatile data, it may be useful to have the browser automatically refresh
the window when its data expires.
( design issues )
3.13 Scott Preece on retrieval
3 Oct 91 (See tracing , Navigation )
My
Chapter 4
Relevant protocols
The WorldWideWeb system can pick up information from many information sources, using existing protocols. Among these are file and news transfer protocols.
4.1 File Transfer
See also the prospero project and the shift project, for more powerful file access systems.
4.2 Network News
(See news address syntax.)
4.3 Search and Retrieve
The WWW project defines its own protocol for information transfer, which allows for negotiation on representation. This we call HTTP, for HyperText Transfer Protocol. See also HyperText Transfer Format, and the HTTP address syntax.
4.4
Whilst the HTTP protocol provides an index search function, another common protocol for index search is Z39.50, and the version of it
4.5 HTTP as implemented in WWW
4.5.1 Connection
The protocol is used over a connection-oriented service. The interpretation of the protocol below in the case of a sequenced packet service (such as DECnet(TM) or ISO TP4) is that the request should be one TPDU, but the response may be many.
4.5.2 Request
The client sends a document request consisting of a line of ASCII characters terminated by a CR LF (carriage return, line feed) pair. A well-behaved server will not require the carriage return character.
The search functionality of the protocol lies in the ability of the addressing syntax to describe a search on a named index.
A search should only be requested by a client when the index document itself has been described as an index using the ISINDEX tag.
4.5.3 Response.
4.5.4
4.6 HyperText Transfer Protocol
See also: Why a new protocol? , Other protocols used
This is a list of the choices made and features needed in a hypertext transfer protocol. See also the HTTP protocol as currently implemented .
4.6.1 Underlying protocol
There are various distinct possible bases for the protocol - we can choose
• Something based on, and looking like, an Internet protocol. This has the advantage of being well understood, and of existing implementations being all over the place. It also leaves open the possibility of a universal FTP/HTTP or NNTP/HTTP server. This is the case for the current HTTP.
• Something based on an RPC standard. This has the advantage of making it easy to generate the code, that the parsing of the messages is done automatically, and that the transfer of binary data is efficient. It has the disadvantage that one needs the RPC code to be available on all platforms. One would have to choose one (or more) styles of RPC. Another disadvantage may be that existing RPC systems are not efficient at transferring large quantities of text over a stream protocol unless (like DD-OC-RPC) one has a let-out and can access the socket directly.
• Something based on the OSI stack, as is Z39.50. This would have to be run over TCP in the internet world.
Current HTTP uses the first alternative, to make it simple to program, so that it will catch on: conversion to run over an OSI stack will be simple as the structure of the messages is well defined.
4.6.2 Idempotent?
Another choice is whether to make the protocol idempotent or not. That is, does the server need to keep any state information about the client? (For example, the NFS protocol is idempotent, but the FTP and NNTP protocols are not.) In the case of FTP, the state information consists of authorisation, which is not trivial to establish every time but could be, and the current directory and transfer mode, which are basically trivial. The proposed protocol IS idempotent.
This causes, in principle, a problem when trying to map a non-idempotent system (such as library search systems which store "result sets" on behalf of the client) into the web. The problem is that to use them in an idempotent way requires the re-evaluation of the intermediate result sets at each query. This can be solved by the gateway intelligently caching result sets for a reasonable time.
4.6.3 Request: Information transferred from client
Parameters below, however represented on the network, are given in upper
case, with parameter names in lower case. This set assumes a model of
format negotiation.
GET document name Please transfer a named document back. Transfer
the results back in a standard format or one
which I have said I can accept. The reply includes
the format. In practice, one may want
to transfer the document over the same link (a
la NNTP) or a different one (a la FTP). There
are advantages in each technique. The use of
the same link is standard, with moving to a
different link by negotiation (see PORT ).
SEARCH keywords Please search the given index document for all items with the given word combination, and
transfer the results back as marked up hypertext.
This could elaborate to an SQL query.
There are many advantages in making the search
criterion just a subset of the document name
space.
SINCE datetime For a search, refer to documents only dated on or after this date. Used typically for building
a journal, or for incremental update of indexes
and maps of the web.
BEFORE datetime For a search, refer to documents before this date only.
ACCEPT format penalty I can accept the given formats . The penalty is a set of numbers giving an estimate of the
data degradation and elapsed time penalty which
would be suffered at the CLIENT end by data
being received in this way. Gateways may add
or modify these fields.
PORT See the RFC959 PORT command. We could change the default so that if the port command
is NOT specified, then data must be sent back
down the same link. In an idempotent world,
this information would be included in the GET
command.
HEAD doc Like GET, but get only header information. One would have to decide whether the header
should be in SGML or in protocol format (e.g.
RPC parameters or internet mail header format).
The function of this would be to allow
overviews and simple indexes to be built without
having to retrieve the whole document. See
the RFC977 HEAD command. The process of
generation of the header of a document from
the source (if that is how it is derived) is subject
to the same possibilties (caching, etc) as a
format convertion from the source.
USER id The user name for logging purposes, preferably a mail address. Not for authentication unless
no other authentication is given.
AUTHORITY authentication A string to be passed across transparently. The protocol is open to the authentication
system used.
HOST The calling host name - useful when the calling host is not properly registered with a name
server.
Client Software For interest only, the application name and version number of the client software. These values
should be preserved by gateways.
4.6.4 Response
Status A status is required in machine-readable format.
See the 3-figure status codes of FTP for
example. Bad status codes should be accompanied
by an explanatory document, possibly containing
links to further information. A possibility
would be to make an error response a special
SGML document type. Some special status
codes are mentioned below .
Format The format selected by the server
Document The document in that format
4.6.5 Status codes
Success Accompanied by format and document.
Forward Accompanied by new address. The server indicates
a new address to be used by the client for
finding the document. The document may have
moved, or the server may be a name server.
Not Authorized The authorisation is not sufficient. Accompanied by the address prefix for which authorisation
is required. The browser should obtain
authorisation, and use it every time a request
is made for a document name matching that
prefix.
Bad document name The document name did not refer to a valid document.
Server failure Not the client's fault. Accompanied by a natural language explanation.
Not available now Temporary problem - trying at a later time might help. This does not imply anything about
the document name and authorisation being
valid. Accompanied by a natural language explanation.
Search fail Accompanied by an HTML hit-list without any hits, but possibly containing a natural language explanation.
Tim BL
4.6.6 Penalties
There are two questions to consider when deciding on different possible transfer formats between servers and clients: Information degradation and elapsed time.
Degradation
When information is converted from one format to another, it may be degraded. For example, when a postscript file is rendered into bitmap, it
loses its potentially infinite resolution; when a TeX file is rendered into pure ASCII, it loses its structure and formatting.
This degradation is difficult to guess from simply the file type, and
for a given file it is quite subjective. Any attempt to estimate a penalty
will therefore be very approximate, and only useful for distinguishing widely
differing cases. A suitable unit would be the proportion, between 0 and 1, of
the information which is not lost. Let's call it the degradation coefficient. One would hope that these coefficients are multiplicative, that is, that the process of converting a document into one format with degradation coefficient c1 and then further converting the result of that with coefficient c2 would in all be a process with coefficient c1*c2. This is not, in fact, necessarily the case in practice, but is a reasonable guess when we know no better.
Elapsed time
The elapsed time is another penalty of conversion. As an approximation one might assume this to be linear in the size of the file. It is not easy to say whether the constant part or the size-proportional part is going to be the more important. The server, of course, knows the size of the file. It can in fact, as a result of experience, make improving guesses as to the conversion time. The conversion time will be a function also of local load. For particular files, it may be affected by the caching of final or intermediate steps in a conversion process. Given a model in which the server makes the decision on the basis of information supplied by the client, this information could include, for each type, both the constant part (seconds) and the size-related part (seconds per byte).
4.7 Why a new protocol?
Existing protocols cover a number of different tasks.
• Mail protocols allow the transfer of transient messages from a single author to a small number of recipients, at the request of the author.
• File transfer protocols allow the transfer of data at the request of either the sender or receiver, but allow little processing of the data at the responding side.
• News protocols allow the broadcast of transient data to a wide audience.
• Search and Retrieve protocols allow index searches to be made, and allow document access. Few exist: Z39.50 is one and could be extended for our needs.
The protocol we need for information access ( HTTP ) must provide
• A subset of the file transfer functionality
• The ability to request an index search
• Automatic format negotiation
• The ability to refer the client to another server
Tim BL
Chapter 5
W3 Naming Schemes
(See also: a discussion of design issues involved, BNF syntax, W3 background.)
5.1 Examples
This is a fully qualified file name, referring to a document in the file name space of the given internet node, and an imaginary anchor 123 within it.
#greg
This refers to anchor "greg" in the same document as that in which the name appears.
5.2.
wais Access is provided using the WAIS adaptation of the Z39.50 protocol.
x500 Format to be defined.
5.3 Address for an index search
given".
5.3.1 Example:
indicates the result of performing a search for keywords "sgml" and
5.4
5.4.1 Examples
This is a fully qualified file name.
fred.html
This <A NAME=0 HREF=Relative.html>relative name</A>, used within a file, will refer to a file on the same node and in the same directory as that file, but with the name fred.html.
5.4.2*.
5.5.
5.5.1.
5.6 Relative naming
This implies that certain characters ("/", "..") have a significance reserved
for representing a hierarchical space, and must be recognized as such
by both clients and servers.
In the WWW address format , the rules for relative naming are:
• If the "scheme" parts are different, the whole absolute address must be given. Otherwise, the scheme is omitted, and:
• If the access and host parts are the same, then the path may be given with the unix convention, including the use of ".." to indicate deletion of a path element. Within the path:
• If a leading slash is present, the path is absolute. Otherwise:
• The last part of the path of the base address (e.g. the filename of the current document) is removed, and the given relative address appended in its place.
• Within the result, all occurrences of "xxx/.." or "/." are recursively removed, where xxx is one path element (directory).
The use of the slash "/" and double dot ".." in this case must be respected by all servers. If necessary, this may mean converting their local representations in order that these characters should not appear
5.7
host), a service name is NOT an appropriate
way to specify a port number for a hypertext
address. If the port number is omitted the
preceding colon must also be omitted. In this
case, port number 2784 is assumed [This may
change!].
See also: WWW addressing in general , HTTP protocol .
Tim BL
5.8 Telnet addressing
is mandatory.
address. If the port number is omitted the preceding
colon must also be omitted. In this case,
port number 23 is assumed.
Tim BL
5.9 difficult to read with the line mode browser.)
An absolute address specified in a link is an anchoraddress. The address which is passed to a server is a docaddress.
anchoraddress docaddress [ # anchor ]
docaddress httpaddress | fileaddress | newsaddress | telnetaddress | gopheraddress | waisaddress
httpaddress h t t p : / / hostport [ / path ] [ ? search ]
fileaddress f i l e : / / host / path
newsaddress n e w s : groupart
waisaddress waisindex | waisdoc
waisindex w a i s : / / hostport / database [ ? search ]
waisdoc w a i s : / / hostport / database / wtype / digits / path
groupart * | group | article
group ialpha [ . group ]
article xalphas @ host
database xalphas
wtype xalphas
telnetaddress t e l n e t : / / [ user @ ] hostport
gopheraddress g o p h e r : / / hostport [ / gtype [ / selector ] ] [ ? search ]
digit 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
digits digit [ digits ]
alphanum alpha | digit
alphanums alphanum [ alphanums ]
void
See also: General description of this syntax, Escaping conventions. Tim BL
5.10 Escaping illegal characters
5.11
address. If the port number is omitted the preceding
colon must also be omitted. In this case,
port number
5.12 W3 addresses for WAIS servers
Servers using the WAIS ( "Wide Area Information Systems" ) protocols
from Thinking Machines may be accessed as part of the web using addresses
of the form (see BNF description )
w a i s : / / hostport / database ...
Access (currently) goes through a gateway which stores the "source" files which contain the descriptions of WAIS servers. This address corresponds to the address of an index. To this may optionally be appended either a search string or a document identifier.
Note that changes have been proposed to the WAIS document id format, so this representation of them may have to change with that format. Currently the WAIS document address necessary for retrieval by a client requires the following information, which is originally provided by the server in the hit list.
Document format This is normally "TEXT" but other formats
such as PS, GIF, exist.
Document length This is needed by the client, who must loop to retrieve the whole document in slices.
Document identifier This is an entity consisting of numerically tagged fields. The binary representation used by WAIS
is transformed for readability into a sequence
of fields each consisting of a decimal tag, an
equals sign (=) , the field value, and a semicolon.
Within the field value, hex escaping is
used for otherwise illegal characters.
See also: Other W3 address formats , BNF definition .
Chapter 6
HTML
The WWW system uses marked-up text to represent a hypertext document for transmission over the network. The hypertext mark-up language is an SGML format. HTML parsers should ignore tags which they do not understand, and ignore attributes which they do not understand of tags which they do understand.
To find out how to write HTML, or to write a program to generate it, read:
The tags A list of the tags used in HTML with their significance. Example A file containing a variety of tags used for test purposes, and its source text .
You can use the line-mode browser to get the source text ( -source option ) of an HTML document you find on the network, so you can use any existing documents as examples.
6.1
6.2.
6.2.1 .
6.2.2 Next ID
This tag takes a single attribute which is the number of the next document-wide
6.2.3. NOT CURRENTLY USED
6.2.4.
All attributes are optional, although one of NAME and HREF is necessary for the anchor to be useful.
6.2.5 .
6.2.6.
6.2.7:
• The text may contain any ISO Latin printable characters, including the tag opener, so long as it does not contain the closing tag in full.
• Line boundaries are significant, and are to be interpreted as a move to the start of a new line.
6.2.8 Paragraph
This tag indicates a new paragraph. The exact representation of this (indentation, leading, etc) is not defined here, and may be a function of other tags, style sheets etc. The format is simply
<P>
(In SGML terms, paragraph elements are transmitted in minimised form).
6.2.9 Headings <H1>, <H2>, <H3>, <H4>, <H5>, <H6>
These tags are kept as defined in the CERN SGML guide. Their definition is completely historical, deriving from the AAP tag set. A difference is that HTML documents allow headings to be terminated by closing tags:
<H2>Second level heading</H2>
6.2.10 Address
This tag is for address information, signatures, etc, normally at the top or bottom of a document. Typically, it is italic and/or right justified or indented. The format is:
<ADDRESS> text ... </ADDRESS>
6.2.11 Highlighting
The highlighted phrase tags may occur in normal text, and may be nested. For each opening tag there must follow a corresponding closing tag. NOT CURRENTLY USED.
<HP1>...</HP1> <HP2>... </HP2> etc.
6.2.12>
6.2.13 Lists
A list is a sequence of paragraphs, each of which is preceded by a special mark or sequence number. The format is:
<UL>
<LI> list element
<LI> another list element ...
</UL>
The opening list tag must be immediately followed by the first list element.
The representation of the list is not defined here, but a bulleted list for
unordered lists, and a sequence of numbered paragraphs for an ordered
list would be quite appropriate. Other possibilities for interactive display
include embedded scrollable browse panels.
Opening list tags are:
UL A list of multi-line paragraphs, typically separated
by some white space.
MENU A list of smaller paragraphs. Typically one line per item, with a style more compact than UL.
DIR A list of short elements, less than one line. Typical style is to arrange in four columns or provide
a browser, etc.
6.3 SGML
The "Standard Generalised Mark-up Language" is an ISO standardised derivative of an earlier IBM "GML". It allows the structure of a document to be defined, and the logical relationship of its parts. This structure can be checked for validity against a "Document Type Definition", or DTD. The SGML standard defines the syntax for the document, and the syntax and semantics of the DTD. See books: Eric van Herwijnen's "Practical SGML" and Charles Goldfarb's "SGML Handbook". Some of the points generally brought up in (frequent) discussions of SGML follow.
6.3.1 High level markup
An SGML document is marked up in a way which says nothing about the representation of the document on paper or a screen. A presentation program must merge the document with style information in order to produce a printed copy. This is invaluable when it comes to interchange of documents between different systems, providing different views of a document, extracting information about it, and for machine processing in general. However, some authors feel that the act of communication includes the entire design of the document, and if this is done correctly the formatting is an essential part of authoring. They resist any attempts to change the representation used for their documents.
6.3.2 Syntax
The SGML syntax is sufficient for its needs, but few would say that it is particularly beautiful. The language shows its origins in systems where text was the principal content and markup was the exception, so a document which contains a lot of SGML is clumsy. There is always, of course, an element of personal taste to syntax.
6.3.3 Tools
For many years, SGML was generated by hand, by people editing the
source. This has led to a hatred of SGML among those who prefer their
own mark-up language, which may have been quicker, more powerful, or
more familiar. The advent of WYSIWYG editors and solid SGML applications
should improve that facet of SGML.
See also: HyTime , HTML , Hypertext Document formats .
Tim BL
6.3.4 AAP
AAP stands for the American Association of Publishers, one of the first groups to fix on a common SGML DTD.
Chapter 7
Coding Style Guide
This document describes a coding style for C code (and therefore largely for C++ and Objective-C code). The style is used by the W3 project so that:
• Code is portable and maintainable.
• Code is easily readable by other project members.
If you have suggestions, do send them. (We do not include points designed to allow automatic processing of code by parsers with an incomplete awareness of C syntax.)
The style guide is divided into sections on Language features, Macros, Module header, Function header, Code style, Identifiers, Include files, Directory structure.
(See also pointers to some public domain styles ).
Tim BL
7.1 Language features
Code to be common shared code must (unfortunately!) be written in C, rather than Objective-C or C++, to ensure maximum portability. This section does not apply to code written for specific platforms.
C code must compile under either a conforming ANSI C compiler OR an original Kernighan & Ritchie C compiler. Therefore, the __STDC__ macro must be used to select alternative code where necessary ( example ). Code should compile without warnings under an ANSI C compiler such as gcc with all warnings enabled.
Parameters and Arguments The PARAMS(()) macro is used to give a
formal parameter list in a declaration so that it
will be suppressed if the compiler is not standard
C - see example. The ARGS1 macro is
for the declaration of the implementation, taking
first the type then the argument name. For
n arguments, a macro ARGSn exists, taking 2n
arguments.
#endif Do put the ending condition in a comment. Don't put it as code - it won't pass all compilers.
const This keyword does not exist in K&R C, so use the macro CONST, which expands to "const" under standard C and nothing otherwise. (See HTUtils.h.)
(part of: style guide )
7.2 Module Header
The module header is the comment at the top of a .h or .c file. Information need not (except for the title) be repeated in both the .c and .h files. Of course, History sections are separate. See a dummy example. Note:-
• Heading To make it easy to spot the file in a long listing, put a header and the file name in the top right-hand corner.
• Authors Just a list to make the initials intelligible. Use initials in the history or in comments in the file.
• History A list of major changes to the file. You do not need to repeat information carried by a code management system or in an accompanying hypertext file.
• Section headings Sections in the file, such as public data, private module-wide data, etc., should be made visible. Two blank lines and a heading are useful for this.
Tim BL
/*	Foo Bar Module					foobar.c
**	==============
**
**	CERN copyright -- See Copyright.html
*/

/*	Global Data
**	-----------
*/

Tim BL
7.3 Function Headings
This style concerns the comments, and so is not essential to compilation. However, it helps readability of code written by a number of people. Some of these conventions may be arbitrary, but are none the less useful for that.
7.3.1 Format
See a sample procedure heading . Note:-
• White space of two lines separating functions.
• The name of the function right-justified to make it easy to find when flicking through a listing.
• The separate definitions for standard and old C.
• The macros PUBLIC and PRIVATE (in HTUtils.h) expand to null and to "static" respectively. They show that one has thought about whether visibility is required outside the module, and they get over the overloading of the keyword "static" in C. Use one or the other. (Use for top-level variables too.)
7.3.2 Entry and exit conditions
It is most important to document the function as seen by the rest of the world (especially the caller). The most important aspects of the appearance of the function to the caller are the pre- and post-conditions.
The preconditions include the values of the parameters and the structures they point to. Both include any requirements on, or changes to, global data, the screen, disk files, etc.
7.3.3 Function Heading: dummy example
} /* previous_function() */


/*	Scan a line					scan_line()
**	-----------
** On entry,
**	l	points to the zero-terminated line to be scanned
** On exit,
**	*l	The line has null terminators inserted after each word found.
**	return value is the number of words found, or -1 if error.
**	lines	This global value is incremented.
*/
PRIVATE int scan_line ARGS1(const char *, l)
{
    /* Code here */
} /* scan_line() */
Tim BL
7.4 Function body layout
Within the body of functions, this is the way we aim to do it... we're not religious about it, but consistency helps.
7.4.1 Indentation
• Put the opening brace { at the end of the same line as the if, while, etc. which affects the block;
• Align the closing brace } with the START of that opening line;
• Indent everything between { and } by an extra 4 spaces.
• Comment the closing braces of conditionals and other blocks with the type of block, including the correct sense of the condition of the block being closed if there was an "else", or of the function name. For example,
            if (cb[k]==0) {		/* if black */
                foo = bar;
            } else {			/* if white */
                foo = foobar;
            } /* if white */
        } /* switch on character */
    } /* loop on lines */
} /* scan_lines() */
Tim BL
7.5 Identifiers
When choosing identifier names,
• Macros should be in upper case entirely, unless they mimic and replace a genuine function.
• External names should be prefixed with HT to avoid confusion with other projects' code. Within the rest of the identifier, we use initial capitals a la Objective-C (e.g. HTSendBuffer).
• The macro SHORT_NAMES is defined on systems in which external names must be unique to within 8 characters (case insensitive). If your names would clash, at the top of the .h file for a module you should include macros defining distinct short names:
#ifdef SHORT_NAMES
#define HTSendBufferHeader HTSeBuHe
#define HTSendBuffer HTSeBuff
#endif
7.6 Directory structure
This is an outline of the directory structure used to support multiple platforms.
• All code is under a subdirectory "Implementation" at the appropriate point in the tree.
• All object files are in a subdirectory Implementation/xxx where xxx is the machine name. See for example WWW/LineMode/Implementation/*.
• Makefiles in the system-specific directories include a CommonMakefile which is in the parent Implementation directory (..).
7.7 Include Files
7.7.1 Module include files
Every module in the project should have a C #include file defining its interface, and a .c source file (of the same name apart from the suffix) containing the implementation.
The .c file should #include its own .h file.
A .h file should be protected so that errors do not result if it is #included twice.
An interface which relies on other interfaces should #include those interface files. An implementation file which uses other modules should #include the .h file if it is not already #included by its own .h file.
7.7.2 Common include files
These are all in the WWW/Implementation directory.
HTUtils.h Definitions of macros like PUBLIC and PRIVATE and YES and NO. For use in all .c files.
tcp.h All machine-dependent code for accessing TCP/IP channels and files. Also defines some machine-dependent bits like SHORT_NAMES. Project-wide definition of constants, etc.
(See also: Style in general , directory structure ) Tim BL
(Source: http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&cl=CL1.132&d=HASH0102b83a0da4ae5fa2bcc7cc.1>=2)
For some time I could easily do without autogenerated migrations. Now I wanted them, and I wanted to use Flask and not Django. I started, very naively, by installing and importing either flask-alembic or flask-migrate, but they all seemed (at that time) to support patterns that I didn't want (e.g. manager, single models.py) or couldn't understand. At some points I didn't get migrations to work at all, or they were empty, or blueprints wouldn't work, or…
What I wanted was
* a folder “models” containing all models with a file for each model
* plain alembic
* a single start file with my setup and configs
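Concretely, the layout I was after looks roughly like this (everything except start.py, env.py and shared_model.py is illustrative naming):

```
project/
├── start.py              # setup, configs, creates the app
├── models/
│   ├── shared_model.py   # holds the shared SQLAlchemy() instance
│   ├── user.py           # one file per model
│   └── ...
└── alembic/
    ├── env.py            # edited to know about the app and its metadata
    └── versions/         # autogenerated migrations land here
```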
After installing alembic via pip, migrations didn't work, and even importing the model in env.py didn't solve it; fiddling with target_metadata didn't help, nor did several other solutions outlined on StackOverflow. So here is what worked for me:
My start/setup file (start.py in my case) has a function:
def start_app():
    app = Flask(__name__)
    # config stuff
    db.init_app(app)
    return app
and
if __name__ == "__main__":
    app = start_app()
    app.run()
The app is started by just running python start.py, with no need for a manager.
I created a shared_model that all models import:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
This makes things easier: all the models just import this shared module, and I can also put other stuff in here that I want access to in my models.
The last thing to do is to edit the alembic env.py:
1. Import the start_app function and start the app
2. Import the db from the shared model and initialize it
3. Configure and set target_metadata
from start import start_app
app = start_app()
from models.shared_model import db
db.init_app(app)
config.set_main_option("sqlalchemy.url", app.config["SQLALCHEMY_DATABASE_URI"])
target_metadata = db.metadata
That’s about it models go into the models folder and can be used in blueprints, alembic revision –autogenerate produces more than “pass” and the app starts like usual.
(Source: https://kodekitchen.wordpress.com/2016/08/)
#include <YesNoNone.h>
Detailed Description
Used for boolean enabled/disabled options with complex default logic. Allows Squid to compute the right default after configuration. Checks that not-yet-defined option values are not used. Allows for implicit default Yes/No values to be used by initialization without configure() being called, but not dumped as squid.conf content.
Use x.configure(bool) when the value is configured. Use x.defaultTo(bool) to assign defaults.
Definition at line 28 of file YesNoNone.h.
Member Enumeration Documentation
Definition at line 30 of file YesNoNone.h.
Constructor & Destructor Documentation
Definition at line 34 of file YesNoNone.h.
Definition at line 40 of file YesNoNone.h.
Member Function Documentation
Definition at line 53 of file YesNoNone.h.
References optConfigured, option, and setHow_.
Referenced by MemStoreRr::finalizeConfig(), parse_YesNoNone(), and testYesNoNone::testBasics().
Whether the option was enabled or disabled by squid.conf values, i.e., by explicit configure() usage.
Definition at line 67 of file YesNoNone.h.
References optConfigured, and setHow_.
Referenced by dump_YesNoNone(), MemStoreRr::finalizeConfig(), testYesNoNone::testBasics(), TransientsRr::useConfig(), and MemStoreRr::useConfig().
Definition at line 59 of file YesNoNone.h.
References Must, optConfigured, optImplicitly, option, and setHow_.
Referenced by testUfs::commonInit(), commonInit(), Security::ServerOptions::ServerOptions(), and testRock::setUp().
The boolean equivalent of the value stored; asserts if the value has not been set.
Definition at line 47 of file YesNoNone.h.
References Must, option, optUnspecified, and setHow_.
Member Data Documentation
Definition at line 71 of file YesNoNone.h.
Referenced by configure(), defaultTo(), and operator bool().
Definition at line 70 of file YesNoNone.h.
Referenced by configure(), configured(), defaultTo(), and operator bool().
The documentation for this class was generated from the following file:
- src/base/YesNoNone.h
(Source: http://www.squid-cache.org/Doc/code/classYesNoNone.html)