| text | url | dump | lang | source |
|---|---|---|---|---|
| string (lengths 20 to 1.01M) | string (lengths 14 to 1.25k) | string (lengths 9 to 15, may be null) | string (4 classes) | string (4 classes) |
Pinaki Poddar reassigned OPENJPA-393:
-------------------------------------
Assignee: Pinaki Poddar
> @Column(nullable=false) setting not taken into account for String field values
> ------------------------------------------------------------------------------
>
> Key: OPENJPA-393
> URL:
> Project: OpenJPA
> Issue Type: Bug
> Affects Versions: 1.0.0
> Environment: Linux 2.6, Java JDK 1.5.0.11, Spring 2.0.7
> Reporter: Gergely Kis
> Assignee: Pinaki Poddar
>
> The @Column(nullable=false) annotation is taken into account when creating the database schema; however, it is not taken into account when inserting String values.
> See the following test case:
> @Entity
> public class A {
>     @Id
>     private long id;
>     @Column(nullable=false)
>     private String name;
>     public A() {}
>     public A(String name) { this.name = name; }
>     [...accessor methods omitted...]
> }
> When trying to persist the instance A(null), the record is created successfully with an empty string as the value of the name column, instead of an error being raised.
> According to my analysis the problem is the following: when the @Column annotations are parsed (see AnnotationPersistenceMappingParser), the FieldMapping.setNullValue() method is not called. As a result, when the String field value is fetched for storing in the database, the default value for strings (an empty string) is returned instead of an exception being raised. See StringFieldStrategy.toDataStoreValue() for reference.
> The proposed solution would be to call this setNullValue() method with the appropriate parameter while the @Column annotations are parsed, but I don't know the OpenJPA source well enough to determine whether this is the proper fix or whether there are other parameters that should also be set in the FieldMapping. In my local tests, however, this change fixed the reported issue.
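A minimal sketch of a test that exercises the reported behaviour through the plain JPA API; the persistence-unit name "test-pu" and the use of a main method are illustrative assumptions, not part of the report:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.PersistenceException;

public class NullableStringTest {
    public static void main(String[] args) {
        // "test-pu" is a hypothetical persistence unit that maps entity A.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("test-pu");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        try {
            em.persist(new A(null));          // name is null although the column is declared nullable=false
            em.getTransaction().commit();     // expected: failure; observed (per the report): a row with ""
            System.out.println("Persist succeeded - the behaviour the report considers a bug.");
        } catch (PersistenceException expected) {
            System.out.println("Persist rejected as expected: " + expected.getMessage());
        } finally {
            em.close();
            emf.close();
        }
    }
}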
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
|
http://mail-archives.apache.org/mod_mbox/openjpa-dev/200807.mbox/%3C1123215339.1217357012024.JavaMail.jira@brutus%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
In 1.1a10 I have made a fairly significant change by adding a CallStack
to the path of execution through bsh scripts. The CallStack replaces the
simple namespace reference that most of the ASTs were using with a stack of
namespaces showing the chain of callers.
For normal users the only difference will be that they will (soon) start seeing
error messages that show a script "stack trace" to the point of failure. This
should make debugging complex scripts easier.
For developers writing BeanShell-specific commands and tools the change is more fundamental, in that we now have two new powerful magic references:
this.caller - a reference to the calling This context of the method.
this.callstack - an array of NameSpaces representing the full callstack.
With this.caller it is finally possible to correctly write beanshell
commands / scripted methods that have side effects in the caller's namespace.
In particular the eval() and source() methods now work correctly with side
effects in local scope. It will also be possible to write other nifty
commands that examine the caller's context and do work... (Think of commands
which act as if they were "sourced" right into place).
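As a rough sketch of the kind of command this enables (illustrative only; the .namespace member and the setVariable() call are assumptions about the scripting API, not something documented in this announcement):

// Hypothetical scripted command that plants a variable in the *caller's* scope
// via the new this.caller reference described above.
setInCaller(String name, Object value) {
    callerNamespace = this.caller.namespace;   // the calling context's NameSpace (assumed accessor)
    callerNamespace.setVariable(name, value);  // side effect lands in the caller's scope (assumed signature)
}

setInCaller("greeting", "hello");
print(greeting);   // should print "hello" if the variable was set in this scope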
Internally these changes were not terribly deep, but they did touch a *lot*
of code. This worries me a little with respect to potential new bugs, so I'd
really like to ask you all to try this release if possible.
As for users outside of the package: I have tried to preserve all of the existing APIs, so you shouldn't have to make changes, although we may need to add more hooks for preserving the call-stack info where it is desired.
I'm shooting for wrapping up the current bug list and making the latest version
the beta in a couple of weeks. Then I'd like to wait a few weeks before making
that the final 1.1 release and moving on towards really new features for
bsh 2.0.
P.S.
As an aside, others might want to use the new callstack as a basis to
experiment with a dynamically bound scripting language vs. the statically
bound way in which BeanShell currently locates variables. With some changes
it would be possible to make scope follow the call chain as opposed to the
namespace parent hierarchy.
The code will be in CVS shortly. Any comments are welcome.
Thanks,
Pat Niemeyer
|
http://sourceforge.net/p/beanshell/mailman/beanshell-developers/?viewmonth=200105&viewday=30
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Basic Security Practices for ASP.NET Web Applications
Even if you have limited experience with and knowledge of application security, there are basic measures that you should take to help protect your Web applications. The following sections in this topic provide minimum-security guidelines that apply to all Web applications. For more detailed information about best practices for writing secure code and securing applications, see the book "Writing Secure Code" by Michael Howard and David LeBlanc and the guidance provided by Microsoft Patterns and Practices.
Even the most elaborate application security can fail if a malicious user can use simple ways to gain access to your computers. General Web application security recommendations include the following:
Back up data often and keep your backups physically secure.
Keep your Web server physically secure so that unauthorized users cannot gain access to it, turn it off, physically steal it, and so on.
Use the Windows NTFS file system, not FAT32. NTFS offers substantially more security than FAT32. For details, see the Windows Help documentation.
Protect the Web server and all of the computers on the same network with strong passwords.
Follow best practices for securing Internet Information Services (IIS). For details, see the Windows Server TechCenter for IIS.
Close any unused ports and turn off unused services.
Run a virus checker that monitors site traffic.
Use a firewall. For recommendations, see Microsoft Firewall Guidelines on the Microsoft security Web site.
Learn about and install the latest security updates from Microsoft and other vendors.
Use Windows event logging and examine the logs frequently for suspicious activity. This includes repeated attempts to log on to your system and excessive requests against your Web server.
When your application runs, it runs within a context that has specific privileges on the local computer and potentially on remote computers. For information about configuring application identity, see Configuring ASP.NET Process Identity.
To run with the minimum number of privileges needed, follow these guidelines:
Do not run your application with the identity of a system user (administrator).
Run the application in the context of a user with the minimum practical privileges.
Set permissions (ACLs, or Access Control Lists) on all the resources required for your application, and use the most restrictive setting practical. For example, if practical in your application, set files to be read-only.
If your application is an intranet application, configure it to use Windows Integrated security. This way, the user's login credentials can be used to access resources.
If you need to gather credentials from the user, use one of the ASP.NET authentication strategies. For an example, see the ASP.NET Forms Authentication Overview.
As a general rule, never assume that input you get from users is safe. It is easy for malicious users to send potentially dangerous information from the client to your application. To help guard against malicious input, follow these guidelines:
In forms, validate all user input and filter it for potentially dangerous content such as script before you use or store it.
Similarly, never assume that information obtained from the request (usually via the Request object) is safe. Use safeguards for query strings, cookies, headers, and so on. Be aware that information that the browser reports to the server (user agent information) can be spoofed, in case that is important in your application.
If possible, do not store sensitive information in a place that is accessible from the browser, such as hidden fields or cookies. For example, do not store a password in a cookie.
Databases typically have their own security. An important aspect of Web application security is designing a way for the application to access the database securely. Follow these guidelines:
Use the inherent security of your database to limit who can access database resources. The exact strategy depends on your database and your application:
If practical in your application, use Windows Integrated security so that only Windows-authenticated users can access the database. Integrated security is more secure than using SQL Server standard security.
If your application uses credentials, store them securely. If practical, encrypt or hash them. For details, see Encrypting and Decrypting Data.
For more information about accessing data securely, see Securing ADO.NET Applications.
If you display error information, check first that the user is local to the Web server. For details, see How to: Display Safe Error Messages.
Create custom error handling for situations that are prone to error, such as database access.
If your application transmits sensitive information, protect it with Secure Sockets Layer (SSL). For details about how to encrypt a site with SSL, see article Q307267, "How to: Secure XML Web Services with Secure Sockets Layer in Windows 2000" in the Microsoft Knowledge Base. Keep sensitive information where it is not in view, such as in server code.
Use the customErrors configuration element to control who can view exceptions from the server.
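For example, a minimal customErrors setting in Web.config could look like the following; the redirect page name here is only a placeholder:

<configuration>
  <system.web>
    <!-- RemoteOnly: detailed errors are shown only to requests made on the server itself -->
    <customErrors mode="RemoteOnly" defaultRedirect="GenericError.htm" />
  </system.web>
</configuration>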
Use the strong encryption algorithms supplied in the System.Security.Cryptography namespace.
Cookies are an easy way to keep user-specific information available, but do not store any sensitive information in a cookie. Instead, keep a reference in the cookie to a location on the server where the information is located.
Set expiration dates on cookies to the shortest practical time you can. Avoid permanent cookies if possible.
Consider encrypting information in cookies.
Consider setting the Secure and HttpOnly properties on your cookies to true.
An indirect way that a malicious user can compromise your application is by making it unavailable. The malicious user can keep the application too busy to service other users, or if nothing else can simply crash the application. Follow these guidelines:
Close or release any resource you use. For example, always close data connections and data readers, and always close files when you are done using them.
Use error handling (for example, try/catch blocks). Include a finally block in which you release resources in case of failure.
Configure IIS to use throttling, which prevents an application from using a disproportionate amount of CPU.
Test size limits of user input before using or storing it.
Put size safeguards on database queries to help guard against large queries using up system resources.
Put a size limit on file uploads, if those are part of your application. You can set a limit in the Web.config file using syntax such as the following code example, where the maxRequestLength value is in kilobytes:
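For example, the following Web.config fragment caps uploads at roughly 4 MB; the specific limit is only illustrative:

<configuration>
  <system.web>
    <!-- maxRequestLength is specified in kilobytes: 4096 KB is about 4 MB -->
    <httpRuntime maxRequestLength="4096" />
  </system.web>
</configuration>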
You can also use the RequestLengthDiskThreshold property of the httpRuntime configuration section to reduce the memory overhead of large uploads and form posts.
|
http://msdn.microsoft.com/en-us/library/t4ahd590
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
CA2105: Array fields should not be read only
When you apply the readonly (ReadOnly in Visual Basic) modifier to a field that contains an array, the field cannot be changed to refer to a different array. However, the elements of the array that are stored in a read-only field can be changed. Code that makes decisions or performs operations that are based on the elements of a read-only array that can be publicly accessed might contain an exploitable security vulnerability.
Note that having a public field also violates the design rule CA1051: Do not declare visible instance fields.
To fix the security vulnerability that is identified by this rule, do not rely on the contents of a read-only array that can be publicly accessed. It is strongly recommended that you use one of the following procedures:
Replace the array with a strongly typed collection that cannot be changed. For more information, see System.Collections.ReadOnlyCollectionBase.
Replace the public field with a method that returns a clone of a private array. Because your code does not rely on the clone, there is no danger if the elements are modified.
If you chose the second approach, do not replace the field with a property; properties that return arrays adversely affect performance. For more information, see CA1819: Properties should not return arrays.
This example demonstrates the dangers of violating this rule. The first part shows an example library that has a type, MyClassWithReadOnlyArrayField, that contains two fields (grades and privateGrades) that are not secure. The field grades is public, and therefore vulnerable to any caller. The field privateGrades is private but is still vulnerable because it is returned to callers by the GetPrivateGrades method. The securePrivateGrades field is exposed in a safe manner by the GetSecurePrivateGrades method. It is declared as private to follow good design practices. The second part shows code that changes values stored in the grades and privateGrades members.
The example class library appears in the following example.
using System;

namespace SecurityRulesLibrary
{
    public class MyClassWithReadOnlyArrayField
    {
        public readonly int[] grades = {90, 90, 90};
        private readonly int[] privateGrades = {90, 90, 90};
        private readonly int[] securePrivateGrades = {90, 90, 90};

        // Making the array private does not protect it because it is passed to others.
        public int[] GetPrivateGrades()
        {
            return privateGrades;
        }

        // This method secures the array by cloning it.
        public int[] GetSecurePrivateGrades()
        {
            return (int[])securePrivateGrades.Clone();
        }

        public override string ToString()
        {
            return String.Format(
                "Grades: {0}, {1}, {2} Private Grades: {3}, {4}, {5} Secure Grades, {6}, {7}, {8}",
                grades[0], grades[1], grades[2],
                privateGrades[0], privateGrades[1], privateGrades[2],
                securePrivateGrades[0], securePrivateGrades[1], securePrivateGrades[2]);
        }
    }
}
The following code uses the example class library to illustrate read-only array security issues.
using System;
using SecurityRulesLibrary;

namespace TestSecRulesLibrary
{
    public class TestArrayReadOnlyRule
    {
        [STAThread]
        public static void Main()
        {
            MyClassWithReadOnlyArrayField dataHolder = new MyClassWithReadOnlyArrayField();

            // Get references to the library's readonly arrays.
            int[] theGrades = dataHolder.grades;
            int[] thePrivateGrades = dataHolder.GetPrivateGrades();
            int[] theSecureGrades = dataHolder.GetSecurePrivateGrades();

            Console.WriteLine("Before tampering: {0}", dataHolder.ToString());

            // Overwrite the contents of the "readonly" arrays.
            theGrades[1] = 555;
            thePrivateGrades[1] = 555;
            theSecureGrades[1] = 555;

            Console.WriteLine("After tampering: {0}", dataHolder.ToString());
        }
    }
}
The output from this example is:
Before tampering: Grades: 90, 90, 90 Private Grades: 90, 90, 90 Secure Grades, 90, 90, 90
After tampering: Grades: 90, 555, 90 Private Grades: 90, 555, 90 Secure Grades, 90, 90, 90
|
http://msdn.microsoft.com/en-us/library/ms182299(v=vs.110)
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
The following example shows how to define a dynamic method, and then create a delegate with an object that is bound to the first parameter of the method.
The example defines a class named Base with a field named Number, and a class named DerivedFromBase that derives from the first class. The example defines two delegate types: one named UseLikeStatic that returns Int32 and has parameters of type Base and Int32, and another named UseLikeInstance that returns Int32 and has one parameter of type Int32.
The example then creates a DynamicMethod that changes the Number field of an instance of Base and returns the previous value.
The example creates an instance of Base and then creates two delegates. The first delegate is of type UseLikeStatic, which has the same parameters as the dynamic method. The second delegate is of type UseLikeInstance, which lacks the first parameter (of type Base). This delegate is created by using the CreateDelegate(Type, Object) method overload; the second parameter of that method overload is an instance of Base, which is bound to the newly created delegate. Whenever that delegate is invoked, the dynamic method acts on the bound instance of Base.
The UseLikeStatic delegate is invoked, passing in the same instance of Base that is bound to the UseLikeInstance delegate. Then the UseLikeInstance delegate is invoked, so that both delegates act on the same instance of Base. The changes in the values of the Number field are displayed after each call. Finally, a UseLikeInstance delegate is bound to an instance of DerivedFromBase, and the delegate calls are repeated.
using System; using System.Reflection; using System.Reflection.Emit; // These classes are for demonstration purposes. // public class Base { public int Number = 0; public Base(int initialNumber) { this.Number = initialNumber; } } public class DerivedFromBase : Base { public DerivedFromBase(int initialNumber) : base(initialNumber) { } } // Two delegates are declared: UseLikeInstance treats the dynamic // method as if it were an instance method, and UseLikeStatic // treats the dynamic method in the ordinary fashion. // public delegate int UseLikeInstance(int newNumber); public delegate int UseLikeStatic(Base ba, int newNumber); public class Example { public static void Demo(System.Windows.Controls.TextBlock outputBlock) { // This dynamic method sets the public Number field of an instance // of Base, and returns the previous value. The method has no name. // It takes two parameters, an instance of Base and an Integer that // is the new field value. // DynamicMethod changeNumber = new DynamicMethod( "", typeof(int), new Type[] { typeof(Base), typeof(int) } ); // Get a FieldInfo for the public field 'Number'. FieldInfo fnum = typeof(Base).GetField( "Number", BindingFlags.Public | BindingFlags.Instance ); ILGenerator ilg = changeNumber.GetILGenerator(); // Push the current value of the Number field onto the // evaluation stack. It's an instance field, so load the // instance of Base before accessing the field. ilg.Emit(OpCodes.Ldarg_0); ilg.Emit(OpCodes.Ldfld, fnum); // Load the instance of Base again, load the new value // of Number, and store the new field value. ilg.Emit(OpCodes.Ldarg_0); ilg.Emit(OpCodes.Ldarg_1); ilg.Emit(OpCodes.Stfld, fnum); // The original value of the Number field is now the only // thing on the stack, so return from the call. ilg.Emit(OpCodes.Ret); // Create a delegate that uses changeNumber in the ordinary // way, as a static method that takes an instance of // Base and an int. // UseLikeStatic uls = (UseLikeStatic)changeNumber.CreateDelegate( typeof(UseLikeStatic) ); // Create an instance of Base with a Number of 42. // Base ba = new Base(42); // Create a delegate that is bound to the instance of // of Base. This is possible because the first // parameter of changeNumber is of type Base. The // delegate has all the parameters of changeNumber except // the first. UseLikeInstance uli = (UseLikeInstance)changeNumber.CreateDelegate( typeof(UseLikeInstance), ba ); // First, change the value of Number by calling changeNumber as // a static method, passing in the instance of Base. // outputBlock.Text += String.Format( "Change the value of Number; previous value: {0}", uls(ba, 1492) ) + "\n"; // Change the value of Number again using the delegate bound // to the instance of Base. // outputBlock.Text += String.Format( "Change the value of Number; previous value: {0}", uli(2700) ) + "\n"; outputBlock.Text += String.Format("Final value of Number: {0}", ba.Number) + "\n"; // Now repeat the process with a class that derives // from Base. 
// DerivedFromBase dfba = new DerivedFromBase(71); uli = (UseLikeInstance)changeNumber.CreateDelegate( typeof(UseLikeInstance), dfba ); outputBlock.Text += String.Format( "Change the value of Number; previous value: {0}", uls(dfba, 73) ) + "\n"; outputBlock.Text += String.Format( "Change the value of Number; previous value: {0}", uli(79) ) + "\n"; outputBlock.Text += String.Format("Final value of Number: {0}", dfba.Number) + "\n"; } } /* This example produces the following output: Change the value of Number; previous value: 42 Change the value of Number; previous value: 1492 Final value of Number: 2700 Change the value of Number; previous value: 71 Change the value of Number; previous value: 73 Final value of Number: 79 */
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
|
http://msdn.microsoft.com/en-us/library/z43fsh67(v=vs.95).aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Laura Stewart wrote:
> On 10/17/07, Rick Hillegas <Richard.Hillegas@sun.com> wrote:
>
>> Recently, the DB PMC discussed whether we should make Derby a top level
>> Apache project. The consensus seemed to be that a) Derby is mature
>> enough to be a top level project, b) no-one felt passionately one way or
>> another, c) therefore this decision should be left to the Derby
>> community. It seemed to me that the major argument for promoting Derby
>> to be a top level project was
>>
>> + This would increase Derby's visibility.
>>
>> The major argument against promoting Derby to the top level was
>>
>> - This would force us to rototill our website.
>>
>> What do you think? Should we move Derby to the top level?
>>
>>
>
> What sort of "rototilling" are we talking about for the Web site Rick?
> Are there guidelines that will help us comply with the changes that
> are necessary?
>
>
Hi Laura,
I haven't wrapped my head around this myself, yet. At a minimum I would
expect some urls to change since we would be going from a
db.apache.org/derby namespace to a derby.apache.org namespace. For the
sake of external links back to Derby I would hope that the Apache site
would be able to redirect the old urls--but I don't know that. I think
that a detailed understanding of this could be provided by the folks who
managed our graduation from the incubator namespace into the db namespace.
Regards,
-Rick
|
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200710.mbox/%3C4716944C.7040502@sun.com%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Hi,
On Sun, Jul 13, 2008 at 1:06 AM, Matej Knopp <matej.knopp@gmail.com> wrote:
> Can anyone explain me why the event observation listeners are removed
> in session.logout()?
Because they were registered within the scope of that session, and are
tied to the access credentials and namespace mappings associated with
that session.
BR,
Jukka Zitting
|
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200807.mbox/%3C510143ac0807121529k715a1b44rdb22531cfa7cfd16@mail.gmail.com%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
{-# LANGUAGE Trustworthy #-} {-# LANGUAGE CPP #-} {-# LANGUAGE DeriveDataTypeable, StandaloneDeriving #-} ------------------------------------------------------------------------------- -- | -- Module : System.Timeout -- Copyright : (c) The University of Glasgow 2007 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : libraries@haskell.org -- Stability : experimental -- Portability : non-portable -- -- Attach a timeout event to arbitrary 'IO' computations. -- ------------------------------------------------------------------------------- module System.Timeout ( timeout ) where #ifndef mingw32_HOST_OS import Control.Monad import GHC.Event (getSystemTimerManager, registerTimeout, unregisterTimeout) #endif import Control.Concurrent import Control.Exception (Exception(..), handleJust, bracket, uninterruptibleMask_, asyncExceptionToException, asyncExceptionFromException) import Data.Typeable import Data.Unique (Unique, newUnique) -- An internal type that is thrown as a dynamic exception to -- interrupt the running IO computation when the timeout has -- expired. newtype Timeout = Timeout Unique deriving (Eq, Typeable) instance Show Timeout where show _ = "<<timeout>>" -- Timeout is a child of SomeAsyncException instance Exception Timeout where toException = asyncExceptionToException fromException = asyncExceptionFromException -- ) timeout n f | n < 0 = fmap Just f | n == 0 = return Nothing #ifndef mingw32_HOST_OS | rtsSupportsBoundThreads = do -- In the threaded RTS, we use the Timer Manager to delay the -- (fairly expensive) 'forkIO' call until the timeout has expired. -- -- An additional thread is required for the actual delivery of -- the Timeout exception because killThread (or another throwTo) -- is the only way to reliably interrupt a throwTo in flight. pid <- myThreadId ex <- fmap Timeout newUnique tm <- getSystemTimerManager -- 'lock' synchronizes the timeout handler and the main thread: -- * the main thread can disable the handler by writing to 'lock'; -- * the handler communicates the spawned thread's id through 'lock'. -- These two cases are mutually exclusive. lock <- newEmptyMVar let handleTimeout = do v <- isEmptyMVar lock when v $ void $ forkIOWithUnmask $ \unmask -> unmask $ do v2 <- tryPutMVar lock =<< myThreadId when v2 $ throwTo pid ex cleanupTimeout key = uninterruptibleMask_ $ do v <- tryPutMVar lock undefined if v then unregisterTimeout tm key else takeMVar lock >>= killThread handleJust (\e -> if e == ex then Just () else Nothing) (\_ -> return Nothing) (bracket (registerTimeout tm n handleTimeout) cleanupTimeout (\_ -> fmap Just f)) #endif | otherwise = do pid <- myThreadId ex <- fmap Timeout newUnique handleJust (\e -> if e == ex then Just () else Nothing) (\_ -> return Nothing) (bracket (forkIOWithUnmask $ \unmask -> unmask $ threadDelay n >> throwTo pid ex) (uninterruptibleMask_ . killThread) (\_ -> fmap Just f)) -- #7719 explains why we need uninterruptibleMask_ above.
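As a usage note (not part of the module source above): timeout takes a limit in microseconds and wraps the action's result in Maybe, returning Nothing when the limit is exceeded. A small self-contained example:

import System.Timeout (timeout)
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
  -- The limit is 1 second (1,000,000 microseconds) but the action needs 2 seconds,
  -- so it is interrupted and timeout returns Nothing.
  r <- timeout 1000000 (threadDelay 2000000 >> return "finished")
  print r  -- prints: Nothing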
|
http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/System-Timeout.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
26 June 2012 21:47 [Source: ICIS news]
WASHINGTON (ICIS)--The new chemicals-sector indicator was presented by ACC chief economist Kevin Swift.
“The chemical industry has been found to consistently lead the US economy.”
The council noted that chemicals production is one of the largest US manufacturing industries.
“The CAB provides a long lead for business cycle peaks and troughs and can help identify emerging trends in the wider US economy.”
“After a relatively strong start to 2012, the CAB is signalling a slowing of the US economy.”
Retroactive to the beginning of this year, the monthly CAB showed US economic gains of 0.6% in January, 0.2% in February and 0.6% in March.
But beginning with a 0.6% decline in April, the CAB shows economic cooling of 0.7% in May and the sharper 1.3% drop in June.
The CAB is thus pointing to a cooling US economy.
The CAB has been calculated for the overall US economy.
|
http://www.icis.com/Articles/2012/06/26/9572859/new-chemicals-sector-indicator-sees-us-recovery-slowing-further.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
connect Database...; mysql
;
This query creates database 'usermaster' in
Mysql.
Connect JSP with mysql :
Now in the following jsp code, you will see
how to connect... Connect JSP with mysql
JSP Simple Examples
in JSP
An exception can occur if you trying to connect to a database... in JSP
The while loop is a control flow statement, which allows code...:
out> For Simple Calculation and Output
In this example we have used
paging in Jsp: jsp code - JSP-Servlet
friend,
pagination using jsp with database
<...];
System.out.println("MySQL Connect Example.");
Connection con...;
}
System.out.println("MySQL Connect Example.");
Connection conn = null;
String
how to connect jsp with sql database by netbeans in a login page?
how to connect jsp with sql database by netbeans in a login page? how to connect jsp with sql database by netbeans in a login page
JSP code
JSP code I get an error when i execute the following code :
<... the following code its getting connected to database
import java.sql.Connection... app only i am unable to connect the database.
I am using Eclipse>
jsp function - JSP-Servlet
a simple example of JSP Functions
Method in JSP
See the given simple button Example to submit...://
JSP
://
<jsp:useBean id="user.../jsp/simple-jsp-example/UseBean.shtml...how can we use beans in jsp how can we use beans in jsp
problem connect jsp and mysql - JSP-Servlet
problem connect jsp and mysql hello,
im getting an error while connecting jsp and mysql.
I have downloaded the driver mysql-connector...;
This is my code
The error i get is as follows
code - JSP-Servlet
code how can i connect SQl database in javascript. and how to execute the query also i think u can not connect to SQL database in javascript.
thanks
sandeep
How to connect mysql with jsp
How to connect mysql with jsp how to connect jsp with mysql while using apache tomcat
code - JSP-Servlet
know how can i call database connection in javascript.
please help me use some server side script like servlet or jsp, call these when user click in check box.in that u can connect to database.javascript is run on client machine
jsp code - JSP-Servlet
jsp code sample code for change password
example
Old Password:
new Password:
jsp
jsp how to connect the database with jsp using mysql
Hi Friend,
Please visit the following links:
Navigation in a database table through jsp
.
Create a database:
Before run this jsp code first create a database named...
Navigation in a database table through jsp
This is detailed jsp code that shows how
Connecting to MySQL database and retrieving and displaying data in JSP
page
Connecting to MySQL database
and retrieving and displaying data in JSP page...;
This tutorial shows you how to connect to MySQL database and retrieve the
data from the database. In this example we will use tomcat version 4.0.3 to run
our
jsp code - JSP-Servlet
jsp code sample code to create hyperlink within hyperlink
example:
reservation:
train:
A/C department
non A/c Department
jsp code - JSP-Servlet
jsp code hello frns
i want to display image from the database along... from database in Jsp to visit....
Thanks
jsp - JSP-Servlet
jsp i want to code in jsp servlet for login page containing username password submit and then change password.in this code how to maintain session...("password");
System.out.println("MySQL Connect Example
simple bank application - JSP-Servlet
simple bank application hi i got ur codings...But if we register a new user it is not updating in the database...so plz snd me the database also....
Thank you
code for insert the value from jsp to access database
code for insert the value from jsp to access database code for insert the value from jsp to access database
Jsp - JSP-Servlet
JSP date picker code I am digging for either a simple example or code to get the Date format in JSP
Insert Image into Mysql Database through Simple Java Code
Insert Image into Mysql Database through
Simple Java Code... simple java code that how save image
into mysql database. Before running this java code you need to create data base and
table to save image in same database
jsp code - Java Beginners
JSP code and Example JSP Code Example
the following links:
JSP
://
EL parser...Can you explain jsp page life cycle what is el how does el search
JSP Simple Examples
JSP Simple Examples
Index 1.
Creating....
Try catch in jsp
In try block we write those code which can throw... page.
Html tags in jsp
In this example
jsp code - JSP-Servlet
;
For the above code, we have created following database tables...jsp code hi
my requirement is generate dynamic drop down lists... statement?
pls provide code how to get first selected drop down list value i want to add below code data in mysql database using jsp... using below code we got data in text box i want to add multiple data in database...
Add/Remove dynamic rows in HTML table
Retrieve image from mysql database through jsp
to retrieve image from
mysql database through jsp code. First create a database....
mysql> create
database mahendra;
Note : In the jsp code given below, image...
Retrieve image from mysql database through
unable to connect database in java
unable to connect database in java Hello Everyone! i was trying to connect database with my application by using java but i am unable to connect...
i was using this code....
try
{
Driver d=(Driver)Class.forName
code to establish jdbc database connectivity in jsp
code to establish jdbc database connectivity in jsp Dear sir,
i'm in need of code and procedure to establish jdbc connectivity in jsp
Simple problem to solve - JSP-Servlet
Simple problem to solve Respected Sir/Madam,
I am R.Ragavendran.. Thanks for your kind and timely help for the program I have asked... the code for ur kind refernce..
Here it is:
EMPLOYEE</b>
<%
out.println("Unable to connect to database.");
}
%>...("Unable to connect to database.");
}
%>
</font>
</body>
</html>...jsp Hi
How can we display sqlException in a jsp page?
How can we
simple code for XML database in JS
simple code for XML database in JS Sir ,
i want a code in javascript for XML database to store details (username and password entered by user during registration process (login process)).
please send me a code .
Thank you
Simple JDBC Example
;
}
Simple JDBC Example
To connect java application to the database we do...,'John',B.Tech') ;
A Simple example is given below to connect a java...("MySQL Connect Example.");
Connection conn = null;
Statement stmt = null
JSP - JSP-Interview Questions
://
Thanks...
This is simple code.
A Comment Test
A Test of Comments... are the comments in JSP(java server pages)and how many types and what are they.Thanks inadvance code for forget password
JSP code for forget password I need forget password JSP code..
example for storing login and logout time to an account
jsp code for storing login and logout time to an account I need simple jsp code for extracting and storing login and logout time in a database table...://
in JSP Code |
Connect JSP
with mysql |
Create a Table in
Mysql database... Windows Media Player |
Connect JSP with
mysql | Connect from
database using... posted to a JSP file from HTML file |
Accessing
database from JSP |
Implement
Display Data from Database in JSP
Display Data from Database
in JSP
This is detailed java program to connect java
application with mysql database... jsp page.
welcome_to_database_query.jsp
<!DOCTYPE HTML PUBLIC "
JSP Search Example code
JSP Search - Search Book Example
... the data from database. In this example we will search
the book from database.
We are using MySQL database and JSP & Servlet to create the
application
jsp
jsp can u send the code to store search keywords from google server engine to my database
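The fragments above repeatedly reference a "MySQL Connect Example"; a self-contained version of that kind of JDBC connection code, with placeholder URL, user and password, looks roughly like this (the database name 'usermaster' is taken from the snippet above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySQLConnectExample {
    public static void main(String[] args) {
        System.out.println("MySQL Connect Example.");
        // URL, user and password are placeholders; adjust them for your setup.
        String url = "jdbc:mysql://localhost:3306/";
        String db = "usermaster";
        String user = "root";
        String password = "secret";
        try {
            Class.forName("com.mysql.jdbc.Driver");   // older Connector/J driver class
            Connection conn = DriverManager.getConnection(url + db, user, password);
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1");
            while (rs.next()) {
                System.out.println("Connected; test query returned: " + rs.getInt(1));
            }
            rs.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}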
|
http://www.roseindia.net/tutorialhelp/comment/81442
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
java.lang.Object
java.security.Permission
oracle.adf.share.security.authorization.ADFPermission
oracle.webcenter.peopleconnections.connections.security.ConnectionsPermission
public class ConnectionsPermission
Permission checked for in all Connections-related operations.
public static final java.lang.String DELIMITER
public static final java.lang.String MANAGE_ACTION
public static final java.lang.String EDIT_ACTION
public static final java.lang.String VIEW_ACTION
public static final java.lang.String AUTH_USER_ACTION
public ConnectionsPermission(java.lang.String name, java.lang.String actions)
This constructor is recognized and called by the security framework.
name- name of the permission object being created
actions- comma-separated list of actions of the permission object being created
public ConnectionsPermission(java.lang.String actions)
Note that this constructor is being provided only as a convenience for not having to pass a specific name while constructing the object, and can be used when constructing permissions for code checks. This constructor is not expected to be called by the framework.
actions- Comma-separated list of actions of the permission object being created
public boolean implies(java.security.Permission permission)
Overrides: implies in class oracle.adf.share.security.authorization.ADFPermission
protected boolean isPatternMatch(java.lang.String sMatch)
Overrides: isPatternMatch in class oracle.adf.share.security.authorization.ADFPermission
public static oracle.adf.share.security.authorization.PermissionActionDescriptor[] getPermissionActionDescriptors()
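A rough illustration of a code-based check using the convenience constructor documented above; the use of AccessController and the chosen action are assumptions for illustration, not taken from this reference page:

import java.security.AccessController;
import oracle.webcenter.peopleconnections.connections.security.ConnectionsPermission;

public class ConnectionsPermissionCheckExample {
    public static void main(String[] args) {
        // Hypothetical check: does the current subject hold the view action?
        ConnectionsPermission viewPermission =
                new ConnectionsPermission(ConnectionsPermission.VIEW_ACTION);
        // Throws java.security.AccessControlException when the permission is not granted.
        AccessController.checkPermission(viewPermission);
        System.out.println("view action granted");
    }
}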
|
http://docs.oracle.com/cd/E28280_01/apirefs.1111/e15995/oracle/webcenter/peopleconnections/connections/security/ConnectionsPermission.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Python bindings for Clojure
JNA libpython bindings to the tech ecosystem.
We aim to integrate Python into Clojure at a deep level. This means that we want to be able to load/use python modules almost as if they were Clojure namespaces. We also want to be able to use Clojure to extend Python objects. I gave a talk at Clojure Conj 2019 that outlines more of what is going on.
This code is a concrete example that generates an embedding for faces:
(ns facial-rec.face-feature (:require [libpython-clj.require :refer [require-python]] [libpython-clj.python :refer [py. py.. py.-] :as py] [tech.v2.datatype :as dtype]))
(require-python 'mxnet '(mxnet ndarray module io model)) (require-python 'cv2) (require-python '[numpy :as np])
(defn load-model [& {:keys [model-path checkpoint] :or {model-path "models/recognition/model" checkpoint 0}}] (let [[sym arg-params aux-params] (mxnet.model/load_checkpoint model-path checkpoint) all-layers (py. sym get_internals) target-layer (py/get-item all-layers "fc1_output") model (mxnet.module/Module :symbol target-layer :context (mxnet/cpu) :label_names nil)] (py. model bind :data_shapes [["data" [1 3 112 112]]]) (py. model set_params arg-params aux-params) model))
(defonce model (load-model))
(defn face->feature [img-path] (py/with-gil-stack-rc-context (if-let [new-img (cv2/imread img-path)] (let [new-img (cv2/cvtColor new-img cv2/COLOR_BGR2RGB) new-img (np/transpose new-img [2 0 1]) input-blob (np/expand_dims new-img :axis 0) data (mxnet.ndarray/array input-blob) batch (mxnet.io/DataBatch :data [data])] (py. model forward batch :is_train false) (-> (py. model get_outputs) first (py. asnumpy) (#(dtype/make-container :java-array :float32 %)))) (throw (Exception. (format "Failed to load img: %s" img-path))))))
(ns my-py-clj.config (:require [libpython-clj.python :as py]))
;; When you use conda, it should look like this. (py/initialize! :python-executable "/opt/anaconda3/envs/my_env/bin/python3.7" :library-path "/opt/anaconda3/envs/my_env/lib/libpython3.7m.dylib")
{... ;; This namespace going to run when the REPL is up. :repl-options {:init-ns my-py-clj.config} ...}
user> (require '[libpython-clj.require :refer [require-python]]) ...logging info.... nil user> (require-python '[numpy :as np]) nil user> (def test-ary (np/array [[1 2][3 4]])) #'user/test-ary user> test-ary [[1 2] [3 4]]
We have a document on all the features but beginning usage is pretty simple. Import your modules, use the things from Clojure. We have put effort into making sure things like sequences and ranges transfer between the two languages.
One very complementary aspect of Python with respect to Clojure is its integration with cutting-edge native libraries. Our support isn't perfect, so some understanding of the mechanism is important to diagnose errors and issues.
Currently, we launch the python3 executable and print out various bits of configuration as JSON. We parse the JSON and use the output to attempt to find the libpython3.Xm.so shared library; so, for example, if we are loading Python 3.6 we look for libpython3.6m.so on Linux or libpython3.6m.dylib on the Mac.
This pathway has allowed us to support Conda, albeit with some work. For examples using Conda, check out the facial rec repository above or look into how we build our test docker containers.
New to Clojure or the JVM? Try remixing the nextjournal entry and playing around there. For more resources on learning and getting more comfortable with Clojure, we have an introductory document.
To install the jar to your local .m2 repository:
$ lein install
To deploy to Clojars:
$ lein deploy clojars
This command will sign the jar before deploying, using your gpg key (see dev/src/build.clj for signing options).
This program and the accompanying materials are made available under the terms of the Eclipse Public License 2.0 which is available at.
|
https://xscode.com/clj-python/libpython-clj
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
When I run >>>arcpy.Near_analysis("ACN_Select","Derden","150 Meters", "NO_LOCATION", "NO_ANGLE","PLANAR") in the python window in ArcMap 10.4, it returns the correct numbers in the field NEAR_DIST.
However, if I run the same in PyScripter:
import arcpy
arcpy.env.workspace = "C:\\Geoplay\\GLO\\Output.mdb"
arcpy.env.overwriteOutput = True
if arcpy.CheckExtension("Spatial") == "Available":
arcpy.AddMessage("Checking out Spatial")
arcpy.CheckOutExtension("Spatial")
else:
arcpy.AddError("Unable to get spatial analyst extension")
arcpy.AddMessage(arcpy.GetMessages(0))
sys.exit(0)
ACN_Select = "C:\\Geoplay\\GLO\\GLO.gdb\\ACN_Select"
Derden = "C:\\Geoplay\\GLO\\GLO.gdb\\Derden"
#Near Analysis
arcpy.Near_analysis(ACN_Select,Derden,"150 Meters","NO_LOCATION","NO_ANGLE","PLANAR")
print "Ready!"
All the NEAR_FID and NEAR_DIST are -1.
I tried creating layers and using those, but it didn't work. Also I tried setting the spatial ref with
spatRef = arcpy.Describe(ACN_in).spatialReference
but it did not change anything.
What am I missing?
The point closest to a feature will not necessarily remain the same if one dataset is in decimal degrees and the other is in meters, since the planar distance between two locations in decimal degrees is not constant but decreases as you head poleward.
At this stage, perhaps one of the files has been 'defined' wrong. The only way that you can confirm this is to examine the layer extents. I confirm visually by inserting a new data frame then add one file to it and examine its properties through the properties dialog and visually on the map. delete that data frame and repeat with a new dataframe and layer. If the extent indicates that the layer is in decimal degrees (-180 to 180 and -90 to 90 for X and Y ranges at most) but the file has been defined wrong, then you have to correct the definition.
On the other side, what if one file's coordinates were in meters and the other in feet? The numbers look like they are projected, but their values will be scaled by the conversion from feet to meters. How to resolve? same process and see where it lands up on a map.
Subtle differences in projection can cause issues. In my area, UTM and MTM are commonly used and it is not uncommon for them to be defined wrong or read wrong... To solve? same process and examine where they fall on the earth relative to where they should be.. Oh yes, there are imperial and metric versions for both. A simple drop of a few bits of information when defining the coordinate system makes a huge difference.
In short, just because the definition of a file says it IS something, it doesn't mean that IT IS what it purports to be.
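A quick way to check for the mismatches described above is to compare the spatial references of both inputs before running Near; this sketch reuses the paths from the question and standard arcpy Describe properties:

import arcpy

ACN_Select = "C:\\Geoplay\\GLO\\GLO.gdb\\ACN_Select"
Derden = "C:\\Geoplay\\GLO\\GLO.gdb\\Derden"

# Print the coordinate system name, type and linear unit of each feature class.
for fc in (ACN_Select, Derden):
    sr = arcpy.Describe(fc).spatialReference
    unit = sr.linearUnitName if sr.type == "Projected" else "decimal degrees"
    print fc, "->", sr.name, "|", sr.type, "|", unit

# If the two coordinate systems differ (or one is geographic), project or correctly
# define the data before Near_analysis, otherwise the "150 Meters" radius and the
# NEAR_DIST values will not behave as expected.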
|
https://community.esri.com/thread/198446-arcpy-near-analysis-fails
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
CONSIDERATIONS Volume XX Number 3
August–October 2005
CONTENTS
The Next China-Taiwan Crisis, 2009-2012, Bill Meridian ... 4
The Death Chart, Margaret Millard ... 9
The Schapelle Corby Verdict, Dymock Brose ... 18
Earthquake & Tsunami in Indonesia & the Gulf of Bengal, Nicole Girard ... 22
Pope Benedict XVI, Virginia Reyer ... 36
The Qualities & the Temperament, Ken Gillman ... 39
Predictive Astrology: Helpful or Harmful?, Leda Blumberg ... 62
The Musical Correlations of a Natal Chart, Gerald Jay Markoe ... 69
The Eighth House, John Frawley ... 79
Where Are My Large Scissors?, Ruth Baker ... 83
On Astrology, Bertrand Russell ... 85
These Considerations
HOW quickly joy can become sadness. On Wednesday 6th July 2005 Londoners were celebrating their city’s selection over four others to host the 2012 Olympics; the next day they were in shock as a series of bombs exploded, tearing through three underground trains and a double-decker bus in a coordinated terror attack during the morning rush hour, leaving the city stunned and bloodied. Three separate explosions, virtually simultaneous, occurred in underground trains in central London. The first was at 8:51 a.m. BST in a train in the tunnel between the stations at Liverpool Street and Aldgate. Subsequent explosions occurred in tube trains at Kings Cross and the Edgware Road. Above ground, at 9:47 a.m., an explosion tore open the roof of a crowded double-decker bus.
The above figure is cast for the moment of the 2005 a Ingress at the seat of the British government (51N30, 0W07), no great distance from the sites of the different explosions. Less than a year ago, in Considerations XIX: 4, we commented on the efficacy of these a Ingresses. Although not discussed in print, we were very concerned at the time with just what the 2005 a Ingress foretold for London and Paris, the w S t being on the horizon at these cities. Now we know, but we also are aware that the astrological year that began on 20th March 2005 has another eight months yet to go. The w, always the significator of the nation’s people, has just risen, a
few arc minutes away from opposing t. e, ruler of the ingress’ 3rd house of transportation and its 12th house of secret enemies, is S y, an opposition highlighted by its proximity to the meridian and by being a multiple of 15º from the w S t. There are several sharp contacts between this 2005 a Ingress chart and the 1st January 1801 chart for the Union of Great Britain with Ireland. London, ruled here by the exalted q at the MC, defeated its opponents in the open contest (u, the 7th ruler, is weak by sign and house) to decide which city would host the 2012 Olympics, it was then attacked by secret enemies (e elevated but hidden under the sunbeams in the 10th house rules the 12th and is disposited by the exalted t in ¦, which in turn opposes the w, significator of London’s citizens). The w A q on 6th July, the day before this terrorist attack, occurred at 14º 31’ f. It squared the ingress e S y, and the Ascendant at London was A y and S t, forming a cardinal T-square to the new moon. 7/7 now joins 3/11 (Madrid, 2004) and 9/11 (WTC, 2001) as dates that will not be soon forgotten. There is a wide variety of articles in the Considerations you have in your hands. They aim both to entertain and educate. The issue begins on rather a somber note, as has this introduction, with Bill Meridian looking to future martial activities in the Far East; the inclusion of a wellreceived past article by our late friend Margaret Millard in which she discusses the death chart; a thorough review of last December’s tsunami by Nicole Girard; and Dymock Brose looking into the astrology of legal matters, a subject also discussed in Let’s Consider. All is not entirely under the influence of t, u and “, however. We have music from the pen of New Age composer Gerald Jay Markoe. Following this change of mood, Virginia Reyer looks at the horoscope of the new pope; Leda Blumberg (a name familiar to readers from our Books Considered section) questions the sense of astrological prediction; while John Frawley and Ruth Baker remind us of the strength of traditional astrology, with emphasis on Horary. There is also a lengthy and (I hope) helpful explanation of how to identify an individual’s temperament, how he or she tends to react in all and any situation, with several examples. Identifying someone’s temperament was once an automatic first step, taken whenever an astrologer had to delineate a natal horoscope. This was done before anything else, even before considering how the different planets were placed among the houses. The traditional approach fell out of favor because it wasn’t as accurate as it should be. The method explained here, starting on page 39, is an approach that includes what tradition seem to have had right and corrects what was in error. Finally, a comment on astrologers of the past century by one of the time’s leading philosophers. It could have been written today. —Enjoy!
The Next China-Taiwan Crisis: 2009-2012 BILL MERIDIAN
RED CHINA has vowed to take back the Republic of China, or Taiwan. In the May 2004 issue of the Boom, Doom, and Gloom Report, Dr. Marc Faber1 points to a report by Wendell Minnick, the Taiwan correspondent for Jane’s seven’t tell me what is going to happen, tell me when. This is the province of astrology. By studying the four thirteen US airmen shot down over China in the Korean War to jail terms. President Eisenhower refused to 1
Website is websiteboomdoomgloom.com
twenty-three miles from a Taiwanese port. The crisis ended when Beijing said that they would not invade Taiwan but were still committed to one China.
The military exercises were meant to intimidate the people of Taiwan in the run-up to the presidential election. On March 23, 1996, the people of the Republic of China on Taiwan defied Beijing by electing Lee Ten statements made by the President during crisis #4 are important. He regarded Taiwan’s government as having begun with the Chinese Republic (ROC) on January 1, 1912. The communist People’s Republic of China (PRC) began on October 1, 1949. Note that the qs in these two charts are square, from ¦ to z, an indication of their basic ideological conflict. The determination of an exact time for Taiwan is tough because the Nationalists fled the mainland over a period of time. On July 16,
1949, the Nationalists organized a council under the leadership of Chiang Kai-shek and prepared a withdrawal to Taiwan (then called Formosa), a move that was completed by December 8. This July 16 horoscope is relevant because it did acknowledge the fall of the mainland government and the establishment of an island government and is thus a modifier while the 1912 chart is a base chart. Both this chart (q in f) and the 1912 (q in ¦) chart have planets in f-¦.
During the first crisis period, we saw eclipses in cardinal signs, f-¦, and “ afflictions. The ROC chart had the eclipses conjunct and opposite the q. “ was in hard aspect to the natal and progressed qs, t, and the MC. In the PRC chart, the eclipses squared the natal q. Progressed t was A “. The eclipse that occurred later in the crisis hit i-“, as the USA threatened use of the a-bomb. During crisis #2 in 1958, there were again eclipses in cardinal signs. “ was parallel the natal ROC t and o. There were also “ and i afflictions to the 1949 chart. In the aggressor PRC horoscope, we see i transiting the natal t A “ while progressed t was semi-square to progressed i. “ squared the MC. The crisis again ended with the USA threatening force as i sailed over the PRC natal “, an indication of potential destruction. The 3rd crisis in 1995-1996 began with the 1994-1995 early x-s eclipses and ended as the eclipses slipped back into cardinal signs. The s eclipse series was a repeat of the 1948-1949 eclipse series when the Republican armies were driven off of the mainland. Thus, the earlier eclipses were re-ignited as were the issues that they caused in ’49. Note
that the crisis was caused by a statement of the desire for independence as the ROC had its i return. The desire for freedom usually does grow strong at such a return. In the PRC chart, we find that “ is afflicting t and o, and is moving over the MC. Crisis #4 in 1999 occurred when ROC had a u return. A solar eclipse squared the natal t, and another squared progressed u. The q had progressed to square its own natal place and oppose the PRC natal q. One eclipse squared the PRC Midheaven while the other was on natal “. Red China’s ambitions were further fueled by natal “ going retrograde in March of 1999, making it very powerful. i opposed the PRC’s natal t A “. This eclipse series did not occur in cardinal signs, but “ was strong again. However, this was the mildest of the crises. The crises were marked by solar eclipses in cardinal signs and strong “ influences. In 2009, the eclipses will return to the f-¦ axis, and “ will cross into ¦. In 2011, an eclipse opposes the ROC q, and “ begins to move over the q in 2012. Regarding the PRC, “ begins to oppose i, and the eclipse opposition to “ squares the communist q in the next year. In the following years, “ will oppose the USA q. The eclipse of January 2011 opposes the USA q, likely drawing America into the conflict. On August 5, 1949, the USA ended aid to Taiwan. America pledged to defend Taiwan on February 9, 1955, but the Taiwan Relations Act of April 10, 1979 dropped recognition of USA’s age-old ally, as subsequent American leaders acceded to Beijing’s desire of one China. Both the 1955 and the 1979 charts have t in a as well as a general cardinal flavor. Thus, cardinal activity will also activate these charts. Horoscopes set for the beginning of diplomatic relations or first treaties are also useful. Below we see the July 3, 1844 Treaty of Wanghsia between the USA and China. Note again that the q and four planets are in a cardinal signs. The planetary activity that will be energizing the prior charts will also energize this one, suggesting a USA-China crisis as it did during the Korean War and the Tibetan invasion. A series of eclipses in cardinal signs again dominated the heavens when Zhou En Lai called for a drive to take Taiwan on August 13, 1954. The conclusion is that 2009-2012 will precipitate the next ChinaTaiwan crisis. I doubt that Beijing would risk doing so prior to the 2008 Olympic Games.
The Death Chart1 MARGARET MILLARD M.D. Death is the means of transportation from world to world. All Beings, gods as well as men, must pass through every world until they come to the Last of All. —The Mobiniogen, the great epic saga of Wales.
A
S THE BIRTH CHART describes our journey and fate in life, occultists say that the death chart shows what we have achieved and, by comparing it with the birth chart, what difficult aspects we have overcome. I wonder if this is true. Next, if the occult belief that the soul's accomplishments in life are shown by the death chart, do good people die under benefic aspects and wicked people under malefic aspects? How do transiting planets at death aspect the birth chart? One would expect that the planets would be benefic for good people and vice versa. It appears that people who have made a significantly beneficial contribution to the world do die under good aspects. The charts of Tito and Winston Churchill both had Grand Trines at death, and so had the great astrologer Brigadier Firebrace. Rajif Gandhi also had a Grand Trine at death. He took up a burden which he had not sought because he felt it was his duty, and paid a bitter price. I have found certain factors which seem to indicate death. I wonder if death in an accident, as in a plane crash, is shown in the chart by the astrological indications common in the charts of those who die naturally. Most of us fear death because we have lost our faith, yet in human history this was not always so. I am thinking now of Socrates who told his friends that he had a good daemon who warned him when anything bad was going to happen. "But," he said, as he drank the draught of hemlock, "my daemon has not spoken to me, so I know that all will be well." Yet the astrological indications are never clear, and, indeed, there is a conspiracy among astrologers to avoid even thinking about death. There are several times in the life when it seems probable but is inexplicably averted, and for this reason it is not possible to foresee the time exactly. 1
Previously published in Considerations VII: 3. Reprinted in fond memory of a dear friend who has now moved closer to the Last of All.
Certainly if death is astrologically foreseen the matter should be kept to oneself. However, sometimes the indications are so strong that death will be almost certain, especially if the patient is very old or very young. I hold to the Rule of Three: never foretell an event unless there are at least three astrological indications. I knew when my father was going to die. In June 1982 there was an eclipse in 27°40' f, exactly square his Midheaven which was A i. I noticed that a few months later u A “ would transit over his Midheaven square the eclipse point. He was 91 years old. I knew he wouldn't live through it. The day the conjunction was exact was the day he died. I was with him. u was in 27°40' z, and “ in 27°37' z. A few minutes later the phone rang. It was the daughter of his oldest friend asking about him. We said he had just died. "Yes, I know," she said. "I saw him pass. And I want you to know something, because he never believed in survival. As he passed I heard him chuckling." It was a peaceful and painless death after thirty hours in coma, and he was a good man, so he had nothing to fear.

Here are my eighteen indications for impending death. I think that eclipses and the Rule of Death are probably the most significant, and also the Parts of Death I and II in the solar return. For this you have to have an accurate chart, which you cannot rely on having. A year ago I foretold the death of a neighbor by the eclipses in January and June, and said he would die either in January or June: he died quite suddenly on Jan 10th, just after the eclipse.

1. There is a very old Rule of Death. It states that when the ruler of the Ascendant or the Ascendant degree, and the ruler of the 4th house or the degree on its cusp, reach a Ptolemaic aspect with each other by progression—direct or converse—the life will come to an end. Parallels also count as aspects. This usually works but not always. Nearly always!

2. The progressed chart shows a Full w or a New w. Both indicate some kind of change. The New w is commoner. Sometimes it is the progressed q which is conjunct or opposite the natal w, sometimes the progressed w is conjunct or opposite the natal q, and sometimes the progressed w and progressed q are in conjunction or opposition.

3. The lunar return (corrected for precession) before death has angular malefics.

4. The progressed chart has one or more quincunxes, and so has the death chart.

5. Transiting $ makes an aspect to one of the natal planets or to the Ascendant or MC, or to a planet (usually a malefic) in the progressed chart.

6. The solar return (tropical) has the q near the nadir, usually in the 4th house, which indicates that the vitality is low. But death is not always associated with low vitality; sometimes it is due to an accident. The house held by the q in a solar return is a very important factor. You will find that the 10th house is often occupied, particularly in the case of well-known people. Tito at his death had the solar return q in the 10th house. Sometimes the q is in the 7th; here it opposes the Ascendant and there is a struggle between life and death. At any rate, it is usually angular.

7. The solar return has angular malefics.

8. The solar return has an afflicted Ascendant: either the lord of the Ascendant is placed in the 6th or 12th, is a malefic or in close aspect with a malefic, or the Ascendant degree is afflicted. The ruler of the Ascendant degree is the Lord of the Year.

9. The solar return may have a New w or Full w. A New w is more common.

10. The Age Harmonic has planetary positions which are in hard aspect to the natal planets, especially involving t, u, and “. The ruler of the 8th house is often afflicted by malefics, and also the hyleg, whether q or w or Ascendant.

11. A recent eclipse falls within a degree of an aspect to one of the Lights, the 8th cusp, the 12th cusp, the angles, or their rulers. Sometimes the eclipse is A, D or S t or u, the takers of life. The orb must be less than a degree. The aspect can be the 22.5° aspect, as well as the D, Z, X or V. This is almost infallible.

12. The eclipse the same number of years before birth as the age of the native may also foretell death. That the arrow of Time has no preferred direction seems to be a law of Nature.

13. Malefics transit the angles of the natal chart. (This is how Charles Carter is supposed to have foretold his own death.)

14. If either t or u, progressed by the degree-for-a-year symbolic method, makes a hard aspect to the hyleg, death will be likely.

15. Death will probably occur when by the Rule of the Septenary a malefic or a natally-afflicted planet is ruler of the seven-year period. The Rule of the Septenary states that the first planet rising after birth rules the first seven years, and so on, until the age of 49, when the planet that was first to rise rules for just two years; then the one that was the seventh to rise will repeat from 51-57, and the sequence then continues but in reverse direction. Only the seven original planets are used.

16. The Parts of Death I and II (Asc plus 8th cusp minus w, using equal houses, or Asc plus 8th minus u) in the solar return aspect the hyleg or 4th or 8th house or rulers of the same.

17. “ by transit or progression afflicts a personal point, usually the q.

18. By Topocentric Primary Directions a malefic aspects an angle.
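For readers who like to check the arithmetic, the following is a minimal sketch in Python of the Parts of Death named in indication 16. The function names and sample longitudes are my own, purely illustrative, and equal houses are assumed as the text specifies.

# A small, hypothetical sketch of the Part of Death arithmetic from indication 16:
# Part I  = Ascendant + 8th-house cusp - w
# Part II = Ascendant + 8th-house cusp - u
# All positions are ecliptic longitudes in degrees (0-360).

def arabic_part(asc, cusp8, subtracted):
    """Return the longitude of a part of the form A + B - C, reduced to 0-360."""
    return (asc + cusp8 - subtracted) % 360.0

def part_of_death_1(asc, cusp8, moon):
    return arabic_part(asc, cusp8, moon)      # Asc + 8th cusp - w

def part_of_death_2(asc, cusp8, saturn):
    return arabic_part(asc, cusp8, saturn)    # Asc + 8th cusp - u

# Illustrative values only: Asc 11 g (131.0), equal-house 8th cusp (Asc + 210 = 341.0), w 18.5 z (198.5)
print(part_of_death_1(131.0, 341.0, 198.5))   # 273.5, i.e. 3.5 ¦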
LOCKERBIE

DO THE charts of the victims of mass catastrophes, who die together, all show the indications of death in their charts? I have studied two of the charts of those who died at Lockerbie, and in one many indications for death are present, but there are few indications in the other, and her natal chart is not afflicted. Of course, the victims of the Lockerbie air crash all had the same death chart. Their day charts are all different. Perhaps we should pay these charts more attention than we do. Prenatal or converse transit day charts are also valid—and are even more specific for each person. The Arrow of Time has no preferred direction. The question I asked myself was: is death preordained for everyone who perishes in a mass disaster, or are deaths due to catastrophes under different rules? Figure 1 is the birth chart of one of the victims. Her name was Mary Lincoln Johnson, and she was 25 years old. She had graduated from a prestigious University and had just spent fourteen months touring India and teaching English in Taiwan. Those who worked for the State Department had been warned of a possible bomb attack and canceled this flight. Many students who wanted to be home for Christmas got stand-by seats. Maybe she was one. Maybe she was not meant to die. Ronald Davison used a chart which, as far as I know, is pretty well unknown by other astrologers. It is the day-for-a-day transit and is set up for the date of the event but for the time and the latitude and longitude of birth. Figure 3 is this Davison-day chart of this young woman. Figure 2 is her solar return chart, which I find the key chart for events of the year. First, was her natal chart seriously afflicted? The 11th degree of g rose, and applied to a square to o in 13° x in the 4th house, and a trine to y in 15° a in the 9th house, one of the fortunate houses. The natal positions of these two planets were occupied by “ and t at the time of the
crash, in quincunx to each other, and therefore t by transit was A y ruler of the Septenary, and natal y was also afflicted by transiting “. Her Ascendant ruler, the q, was in 22° d in the 9th house. In the solar return (Figure 2) notice that i is A u in 29° c. Now look at her day chart: the q has come to 29° c, to conjunct those solar-return malefics. The solar return is the key chart for looking to the fortunes of the coming year. It is interesting that r rose just before the q on her last birthday. This, the heliacal rising of r, which happens every 585 days, was supposed by the ancient Mayas to be extremely unfortunate. They thought that r, disappearing behind the q, was changed into a demon. Natally, it seems a strong and rather fortunate chart, except for i A t A “ in the 2nd house. As the 2nd opposes the 8th, there is a suspicion of death by violence. The q is Q t (72°, a good aspect, bestowing brains and talent), trine a stationary retrograde u (giving a sense of duty and an ability to stick to a job), and square the w. The w is in the 8th house, and opposed by the last New w eclipse on 11th September—not so good. In fact, six out of seven Lockerbie victims had the eclipse in 18°43' h aspecting a natal factor: a planet, house cusp, or l. The only close parallel of declination in the chart is between u and o, an extraordinarily difficult contact. At death i and the q were parallel and conjunct.
The natal q is in e's sign, and e is in 29° s with the Pleiades (also known as The Weepers, and considered unfortunate by most authors). e is badly afflicted. It is D i and Z y. e was also the most afflicted of all the planets in her last solar return. It was not a fortunate solar return. Transiting u and i were quincunx natal e, and the quincunx is a death aspect. You can say that she did, indeed, have a chart which hinted at the possibility of violence. Natally, t is A “ in the 2nd house. At the time of the bomb which brought down the plane t and “ were again in aspect to each other, signifying danger. So were there any indications for death? Yes, there were the following:

1. The Rule of Death. This applies. Regressed r and regressed q are parallel. There is one other connection between the ruler of the 4th and the lord of the Ascendant, the q. The regressed secondary chart has 22° n on the 4th cusp. This is square the natal q. There is a Ptolemaic aspect between them. The secondary progressed and regressed charts (Figures 4 & 5) are set for the birth place. Alexander Marr found the converse (regressed) chart for days before birth more valid for death than the direct secondary (progressed) which most astrologers use. Transiting l at death is in 22º n, conjunct the 4th cusp. The 4th cusp is the end of life and the l releases the energy of whatever it contacts.

2. The secondary charts do not show a New w or a Full w.

3. The precessed lunar before death had only one angular malefic. i was in the 7th house, also the q. The precessed lunar's ASC was 9° d, square natal “.

4. The progressed chart does have quincunxes, as does the death chart. t at 20° h is V y at 18° a, and V u at 22° b.

5. Transiting $ at 4°50' f and declination 16N32 is parallel natal e.

6. The solar return q is not angular.

7. The solar return has 21° g rising and 12° s on the MC. No malefics are angular, but natal u is in 23° g. y is in the 10th house in 22° s, also the w in 13° d.

8. The Lord of the Year is the q. It is in 22° d, applying to the i A u in 29° c. As noted earlier, r rises just before it, heliacally. It is not otherwise afflicted, but, as noted, the Ascendant degree is opposite natal u.

9. The solar return does not have a New or Full w.

10. The Age Harmonic for the 26th year (her age was 25 years 6 months) has u in 28° f square natal MC, and i in 14°59' h is square the Age-Harmonic Ascendant and opposition natal $ at 14°54' n.

11. The eclipse of 11th September 1988 was at 18°43' h opposing natal w in the 8th house. In the last solar return the L is in 18°27' h conjunct the eclipse.

12. The eclipse of 14th May 1938 was at 22°30' x, quincunx natal q.

13. Malefics do not transit the angles of the natal chart.

14. Adding or subtracting a degree for a year to natal t gives nothing of significance. But adding a degree for a year to natal u brings it to 18°33' n, the exact position of natal w in the 8th house. The w is not hyleg, but this aspect has to be significant.

15. y is the ruler of the Septenary. It is natally afflicted by the quincunx to o, modern ruler of the 8th house.

16. The two Parts of Death in the last solar return are 8°41' ¦, and 22°01' d conjunct the q.

17. “ does not aspect any personal point. It is transiting at 14°15' x.

The regressed secondary chart is not favorable. t is in 23°11' g opposition and contra-parallel natal u at 23°06' b. Transit o at 9° ¦ has just completed the semisquare.
In the direct secondary i is parallel natal t. This threatens catastrophe. u A w is rising, trine natal q. Another aspect of violence is transit t in 15° a exactly semisquare natal e, and this is more like what we would expect for "danger in travel." Curiously, the chart of the crash resonates with Mary's chart, for the two Ascendants are the same within a few minutes of arc. Her hyleg is the q. The rule is that for death to occur it must be aspected by either progressed t or u. Regressed u is within 11' orb of a trine to the natal q, and a trine is an aspect which facilitates. y at the crash was conjunct the regressed q at 27° s. At death an aspect of y with the q is by no means unusual. Does it mean there is an expansion of consciousness at death? o seems very actively connected with “, Lord of Death, and perhaps this is because the poor victims were hurled into the astral world so suddenly. Mary's natal o in 14° x is conjunct transiting “, and takes part in the death Yod formed by t in a, sextile the w in d, with both quincunx “. o often plays a part in death charts, but it is not usually involved with
other planets as in this case. Perhaps this marks victims of mass disasters. I found it in the other Lockerbie victim's chart which I examined closely. In the chart of the crash of the plane t was the most elevated planet, in the 9th, V “ in his own sign x, an aspect indicating violence and terrorism. The q was conjunct i and u in ¦ and it was thirty-six hours before a Full Moon in f exactly S i. e was D t—"Danger on journeys" as e rules travel. Mary's death occurred at the time that was fated. The afflictions to e, the last eclipse in 18° h opposite the natal w in n in the house of death, and the fulfillment of the ancient Rule of death all indicate this. The Part of Death was also conjunct the q, the hyleg. But remember that y was conjunct regressed q at her death, and in the Lockerbie chart set for the birthplace r is the most elevated planet. Death may not be a great misfortune for, after all, it is the means of transportation from world to world, and all of us must pass through every world until we reach the Last of All.
The Schapelle Corby Verdict
DYMOCK BROSE
THE GUILTY VERDICT against Schapelle Corby was delivered by the chief Judge, Linton Sirait, in the Court Room at Denpasar, Indonesia, on May 27, 2005 at 10:40 am (local time). This was the outcome of Ms. Corby's trial for allegedly importing 4.1 kilograms of the drug marijuana from Sydney into Bali on 8th October 2004, when Ms. Corby was arrested. The verdict is perceived by nearly all Australians as manifestly unjust as well as totally lacking in humanity—apparently a standard feature of Indonesian law with its incredibly harsh sentences unleavened by any trace of mercy. I do not propose here to pursue that aspect of the case, but to glean what we can from the event chart about Schapelle's future. As the event was initiated by the chief judge I take the 1st house as representing him and the opposing 7th house as representing the accused, 27-year-old Schapelle Corby. g rising with ruler q in the double-bodied sign d is certainly appropriate as the chief judge had two companion judges to assist him in arriving at a verdict.
y is trine the q, signifying that Judge Linton Sirait acted honorably according to his lights—he is a Christian and a religious man. Unhappily for Ms. Corby those "lights" and the Indonesian legal system are totally at variance with Western legal concepts and the presumption of innocence until guilt is proved. Schapelle Corby's significators are u, ruler of the 7th House, its occupant o close to the cusp, and r, the natural significator of young women. The verdict in a Horary chart would normally be taken as represented by the 4th house, but not necessarily in an Event chart. Here I regard it as the 5th house—the creative output of Judge Sirait. “ on the cusp is a feature of dire import for Schapelle, as it opposes r—and it is coming closer by mutual application. The ruler of the 5th is y, trine to the q, as previously noted. So the verdict was not based on spite or prejudice—just a different legal system. “ S significator r represents destruction of Schapelle's normal prospects and way of life. Taking the 7th cusp as her Ascendant, the 12th house of the Chart becomes her 6th, which bodes ill for her health: u is there in its detriment, opposing and disposited by the w in Schapelle's 12th. The w and u in Mutual Reception only makes the opposition stronger. There is no doubt Schapelle's health will deteriorate in prison and lead to hospitalization. The l's in the same degree as Schapelle's significator r is another adverse factor, well-recognized as having an evil effect. The maleficent star Algol is at 26º14' s, little more than 1º from the Midheaven. This bears directly on the outcome for Schapelle as it is the 4th House from her Ascendant (7th Cusp). From the accused's point of view the chart could hardly be much worse. One positive factor for Schapelle is the mutual reception of r and e which assures her the support of numerous friends. The 6th house of the chart, being the 2nd to the 5th, shows the immediate future following from the verdict. It holds no comfort for Schapelle. The w there, opposed by u, threatens continued restriction of liberty, confirmed by u's square to the ls lying along the axis of legal proceedings. The best that can be said for Schapelle from the event chart is that the applying trine of the w to e in s will bring enduring support from friends and supporters. She will probably need this to keep her sane—and alive. When will her incarceration end? The sentence is 20 years, though with "good behaviour" (and bribes) the term may be reduced to about 11 years, which is quite enough to kill her. An Indonesian prison must be a hell-hole for a woman accustomed to the food and amenities of Western civilization. From the cyclic charts I see no prospect of her being granted a Presidential pardon.
However, the Solar Return chart of 27 May 2006 (drawn from the verdict Chart) holds good prospects of Schapelle’s condition being changed very much for the better during the year. The principal factor is r ruling the house of confinement trine to “, co-ruler of the house due to the intercepted sign x. The w and q in conjunction and ruling respectively the 9th and 10th
Houses mark the year from 27th May 2006 to be a turning point effecting a change in her legal status. As i trines y in the 12th house this looks like a beneficial change of domicile—very probably a removal to an Australian prison as arranged by international negotiations (t and u in 9th). For Schapelle, imprisonment in her home country Australia instead of Indonesia would be like a gift from the gods, 'a consummation devoutly to be wished'.
June 2006 has a series of secondary progressions plus a station of i at 14º43’ n which point to this month being the critical one. But even if it does bring a beneficial change in her situation, a year is still a long time to suffer in an Indonesian penitentiary, and it would not conclude her sentence. She would have to complete the term in Australia. Schapelle Corby has become an ill-fated victim caught in the toils of a foreign, machine-like legal system. If she is innocent, as most Australians strongly believe, the offence against her is an enormity of wickedness. If she is guilty, she still has not merited the unmitigated harshness of her sentence. In this event I would attribute the extreme consequences to transgressions against the Law of Love committed at an earlier stage of her spiritual evolution—in other words, Karma. “ in the 12th house of her nativity (see below) in close opposition to the Vertex certainly suggests this.
Earthquake & Tsunami in Indonesia & the Gulf of Bengal 26th December 2004
NICOLE GIRARD
Not foreseeing is somehow weeping —Jules Verne
DESPITE scientists telling us that we can't, we are able to use astrology to successfully foresee earthquakes. This is something that is not understood in our rationalist civilization, despite many examples of significant scientists, such as Ptolemy, Galileo, Copernicus, Tycho Brahé, Kepler and Morin de Villefranche, all of whom made extensive use of astrology. Newton, reckoned a genius by everyone, was an alchemist before anything else. Knowledge is not a scientific monopoly. Knowledge is also obtained through the arts, literature, philosophy and religion. Economic forecasters, who are very trendy nowadays, do not match the skills of good astrologers. Astrology is one of these ways by which we discover knowledge; we do so based on the unity of the universe and our means of perception. We are linked to wholeness through the Cosmos, which leads to the interactivity of phenomena. This may explain the existence of some other ways of divination, but astrology is precise and based on mathematics. Ptolemy explained his method of predicting future earthquakes in the Tetrabiblos: it works fairly well and I've been testing it for many years. Using this approach we can detect periods during which there is a high risk of a major earthquake, often pin-pointing the very day of the event. I was, for example, able to forecast, based on the solar eclipse of 11th August 1999, which contained a close opposition between the q and i, that as the transit q moved to make further aspects to i there would each time be a major earthquake, and this was true 75% of the time over the next six months. An earthquake in Athens occurred on 7th September 1999 when the q was V i, and another in Taiwan on 21st September 1999 with the q X i. When identifying the astrological cause of an earthquake, one must start with the solar eclipse that occurred before the event. Its chart is to be studied by identifying the dominant aspects in it. A planet in close aspect to the q is one of these. When these aspecting planets are again in aspect with the q, the event has a strong chance of occurring. For instance, the solar eclipse prior to the earthquake in Indonesia on 26th December 2004 occurred on 14th October 2004. It was observable in Siberia,
Alaska, over the Pacific Ocean but not in Indonesia, so we don't know the length of the darkening time over the country where the event occurred.
At the time of the solar eclipse of 14th October 2004 we did not know an earthquake would later occur in the north-west of Indonesia, but a chart cast for that area is certainly meaningful, see Figure 1. The most elevated planet, r, the planet of balance, is in h, the sign of its fall. It is placed in the middle of the sky, is the dispositor of the five bodies in z, and is the exaltation ruler of the underground 4th house. r has two difficult aspects, from o and i, planets that represent the sea and brutal outbreaks. In this chart r is poorly balanced. Because of the angularity and afflictions of r one must fear a strong earthquake, such as often occurs in Indonesia. The placement of the average – in f close to the cusp of the 8th house is often the sign of high mortality. The true – is not shown in Fig. 1; its position is 18º12' d in the 7th house, S “ and trine the eclipse. An eclipse itself is evil by nature—it is the removal of light—and here it unbalances the sign of z. The closeness of the debilitated t, which is A y and D –, suggests something rather excessive. The q is in its fall in z and here, besides being eclipsed by the w, it is D u. With u also in the sign of its fall, this q D u is the worst possible aspect, and it is not at all modified by the q's sextile to the rising “ or its square to $. Astrologers should note which planets aspect the q in Figure 1—there are five: t, u, “, $ and the true –—and closely watch them as they transit and make further aspects to the transit q, for it is at these times that powerful
earthquakes are most likely to occur. On 23rd December 2004 there was a very strong quake (rated 8.1 on the Richter scale) north of Macquarie Island (50S14, 160E08), in the ocean between New Zealand and Antarctica. It was the final day of q A “ being within orb; the q at 2º ¦ is now separated by 10º from the conjunction. In the chart for this earthquake (Figure 3) t rises close to the Ascendant, S w. u, the 4th ruler, is exactly on the Midheaven, closely opposed by $ from the cusp of the 4th house (the house representing the underground). The w is again aspecting u, as it was at the earlier eclipse. The q is exactly opposing the true –.
Two of the planets aspecting the eclipse, “ and the true –, were again aspecting the q at the time of this earthquake in the Antarctic. However, the other three planets that had aspected the q at the 14th October 2004 eclipse are now involved in close aspects with the angular w. This suggests that in order to be as precise as possible in predicting major earthquakes, one must identify transiting repeat aspects involving both the q and the w, the two lights that are involved in the eclipse itself. Such precision would involve much work and possibly a team of astrologers, but it is well worth doing. I had previously identified 23rd December 2004 as one of several days in which there was a high risk of an earthquake, but had expected it to be the final day of that risky period—the q-“ aspect was about to widen to beyond 10º—and neglected the aspects of the minor bodies, $ and the –. This was an error because the waxing w brought back the q-w-“ aspect, and caused the high-risk period to continue.
Figure 2 is cast for the moment of the earthquake of 26th December that caused the deadly tsunami. The q and Ascendant are in ¦, with u, their ruler, at the Descendant, S $. The q in the 12th house of catastrophes is opposed by the w. This is the day of a full w, which occurs some fourteen hours later at 3:07 pm UT. It is also the day when the true lunar nodes (not shown) moved from the s-x axis to a-z. The ls are conjunct the MC-IC axis and have the property of creating motion. The L at the MC always represents a lack; in z it is a lack of balance. The l at the IC indicates movement in the underground, while its dispositor, t, having just entered c and applying to D i, indicates a brutal outbreak. r, the most elevated planet at the time of the prior eclipse, has now reached 11° c, the Ascendant in Figure 1, the chart for the 14th October 2004 eclipse at this location. The loss of equilibrium prefigured by the placement of the debilitated r at the Midheaven of the eclipse chart manifests itself on 26th December, the day that transiting r crossed the eclipse's Ascendant. As also happened at the moment of the Antarctica earthquake of 23rd December, u and $, two of the planets aspecting the eclipsed q on 14th October, are angular and closely oppose each other. Here u is the ruler of the Ascendant. The q A “ itself has moved out of orb, there is now 12° between the pair, but their midpoint at 28° c is being opposed by the w from 28° d, so the q-“ pairing is very much involved. The w is moving from S “ to S q: amplifying the tide to which it
brings a deadly aspect from “, even though tsunamis are not linked to the tide. The w entered f at 4:39 am UT, moving to aspect u and the – within that sign. Eighteen minutes earlier, at 4:21 am, there was another earthquake (magnitude 7.1), this time in the Nicobar Islands (6N53, 92E56). As we saw in the two previous charts, the 4th house ruler is at the MC—an event underground brought into prominence. One of these islands, Indira Point, would have been swallowed by the 10-meter high tsunami at this very place. The lunar Apogee (the w A the true –) occurs at 07:15 pm on 27th December, by which time many of the wounded from the tsunami were dying. The Ascendant in Figure 2 is 22° ¦. Two hours after the earthquake struck, o arrived at the eastern horizon, at which time the tsunami wave hit India and Sri Lanka. It took another six hours to reach the coast of East Africa. o in the 1st house, inflated by its trine from y, together with the full moon at its apogee, and the ls on the angle, combined to expand the size and range of this deadly tsunami. If we list all the aspects to the eclipse of 14th October 2004 we find the following:

Planet     Aspect to Eclipse    Activity at Quake
t          A                    D i
u          D                    S j
$          D                    A j
True –     F                    S q
“          G                    A q separating
With only two of the five planets aspecting the q, and one of these two, the A “, already out of orb, the approach I had been advocating failed to predict the day of the earthquake. I now realize that aspects between the w, the other light in the eclipse, and the five planets should have been studied. An accurate prediction of the earth's loss of equilibrium and stability could have been made by the transit of r on 26th December over the eclipse's Ascendant, but only if the chart for the October eclipse had been relocated to the earthquake location. The tsunami appears to have been more dependent on the w than on the q, because the number of casualties greatly increased as the w A u became closer and closer, at which time the tsunami crossed the Gulf of Bengal. The placement of u, $ and the ls on the angles points to the timing of the earthquake. The charts of the earthquakes that occurred in Indonesia and
Antarctica on the 23rd and 26th December are very similar; u S $ is angular in both. However, Figure 2, the chart for the 26th December quake and tsunami, has the ls on the MC-IC axis, making it more lunar, which may explain the creation of the deadly tsunami and the thousands of deaths it brought with it. Let's see how this method applied to the major earthquakes (Richter 7 and above) that occurred following the eclipse of 14th October 2004.

Date in 2004   Time (UT)   Lat.      Long.       Location                        Richter scale
23rd Oct       —           —         —           Honshu's west coast, Japan      6.6
11th Nov       21:26       8S09.1    124E52.1    Kepulauan Alor, Indonesia       7.5
15th Nov       09:06       4N41.7    77W30.5     West coast of Colombia          7.2
22nd Nov       20:26       46S39.4   164E42.4    West coast of New Zealand       7.1
26th Nov       02:25       3S36.2    135E22.6    Papuasia, Indonesia             7.1
28th Nov       18:32       42N59.7   145E06.8    Hokkaido                        7.0
23rd Dec       14:59       50S14.4   160E08.0    North of Macquarie Island       8.1
26th Dec       00:58       3N23.9    95E46.7     Indonesia's northwest coast     8.5
26th Dec       04:21       6N53.4    92E56.4     Nicobar Islands                 7.1
Although not considered a major quake, the one on 23rd October can be linked directly to the eclipse. During the eclipse season the q is still in conjunction with the ls and on the 23rd the w had come to trine this conjunction. There often is a major earthquake in the eclipse season. We then have a suite of five major quakes in November. That of 11th November had q A w A L A t with the q A Ascendant; it is the first day of the q F u aspect, and the other aspects are q C “ and q F $. With the q aspecting each of the planets that had aspected it at the eclipse, an earthquake on 11th November was strongly foreseeable. The 15th November earthquake occurred with r at 21º z, the very place of the eclipse in which r had indicated a lack of equilibrium. The quake occurred just as r rose up into the eastern sky at the Ascendant. Of the other planets indicated by the earlier eclipse, t is rising with the L in the 1st house, u is F q, “ is C q, and $ is G q from its position exactly at the IC. With so many indicators an earthquake was to be expected. On 22nd November the q entered c, F u. There is a close w X q. $ is rising in the 1st house. This quake is too difficult to foresee with just these aspects but the tight bi-quintile aspect between u and “ makes this period a very dangerous one.
26th November was the final day of the q F u. It was the day of a full moon, with the q and w opposed along the MC-IC axis. This whole chain of major earthquakes occurred within the period that q F u was within orb, bringing out the effect of u's original square to the eclipse. With u in f, the sign of its detriment, the entire period of the trine was dangerous. This danger was increased with u being the exaltation ruler of the eclipse. The first and the last day of any aspect must be particularly noted—for this q F u these were the 11th and the 26th November, on both of which days there was a major quake—and not only the day on which the aspect is exact. 28th November had the ls' axis located precisely on the horizon, with the L at the Ascendant, and the w at 28° d S “; one aspect only, so the quake was not foreseeable, but it seems to herald the one of 26th December, when the ls ride on the MC-IC axis and the w is again at 28° d S “.
WHY INDONESIA? There were 175,000 deaths. There are many common points between the mundane horoscope for Indonesia and the chart for the moment of the earthquake, which emphasizes the principle that all phenomena are interconnected. For the mundane horoscope of Indonesia I am using the chart for the moment when Queen Juliana of the Netherlands signed the transfer-of-powers agreement at 9:22 am UT on 27th December 1949, cast for Jakarta, the capital of Indonesia. The chart contains a grand cross in cardinal signs: the q at 5° ¦ is squared by the w and t, and S i. t and i again closely aspected the q at the moment of the earthquake. The Ascendant at 11° d is exactly opposed to the 11º c Ascendant at the moment of the solar eclipse of 14th October 2004 (see Figure 1), which, as we've already seen, was the position of r at the moment of the earthquake (see Figure 3). The solar revolution takes place at 5:45 PM on 26th December, on the day of the earthquake. The dignified w is just 1° away from S q and moving to its apogee in f (the maximum time of lunar power, A –). The Ascendant is 21° z, the very same degree on which the key event of the year, the October eclipse, occurred. There are several tight connections between this solar return and the natal chart; they include: w SR D w R, t SR G t R, y SR A o R. The solar return's MC-IC axis, relocated to the quake's location, is at 6° f-¦, on the return's q S w. This shows the importance of the eclipse, w and q in this event. At Jakarta, u, ruler of the 4th house, is in the 10th, indicating an underground collapse. The presence of o in the 4th house tells us that the collapse happens beneath the sea. Underwater quakes are a frequent occurrence all along the Indonesian coast, and these are indicated by u and t in the 4th house of the Indonesian natal chart.
$, the body associated with providential rescues, is in the return's 4th house. This may relate to the massive outpouring of international assistance that was received following the disaster. It may also indicate that such aid will not last long. In fact, Indonesia wishes the foreign helpers to leave. Recall Bam's earthquake? It occurred exactly a year before, on 26th December 2003, killing 30,000 people. The collapse of this splendid medieval city prompted many feelings of solidarity. Billions of dollars were promised but only 17 million was ever received. The population continues to live in refugee camps because their houses have never been rebuilt.
WHY THAILAND? There were 5,200 deaths (half of whom were tourists). Thailand's natal Ascendant is 18º20' d, which was the position of the true – at the time of the October 2004 solar eclipse. At the same time “ opposes the country's Ascendant from 20º c. The eclipse itself at 21º z sextiles this sensitive point. Thailand has the q-w-“ connection by an in-between midpoint: the w from the Midheaven in n trines the q/“ midpoint in f. The well aspected but wide w A l in n points to the country's important coastal resources that are so popular with tourists. e, the Ascendant ruler, is conjunct both r and the 2nd house “—one of the country's most important sources of income is generated by sexual tourism (800,000 minors are used as prostitutes). The October 2004 solar eclipse at 21º z squares Thailand's “ and opposes its i. The country's natal q at 2° f was opposed by the quake's q, and transited by the w a few hours after the catastrophe. The – was transiting over the country's q/“ midpoint.
WHY INDIA? There were 16,000 deaths. In its natal chart the q A y A r in b are opposed by “ in g. The w is closely D q. The quake's q was conjunct India's MC. The sextile between i at 1° f at the IC and the w A true – indicates the high frequency of high-mortality catastrophes in the subcontinent. i rules the railways, aviation and explosions as well as brutal faults underground. It is the ruler of the 8th house by exaltation, showing many deaths caused by accidents and catastrophes. The tsunami reached India's east coast around 3 am UT. The w entered f at 4:39 and transited i at about 6:30 AM, the time that radios began reporting the disaster. India's natal chart has an important, expansionist trine from t A L A o to the three benefic bodies in b. By the time of the December 2004 earthquake, o had separated from A y and was applying to A r, while the transiting y was moving to A o. These transits clearly indicate the economic boom India is currently experiencing. India wants to dominate the
Indian Ocean, is about to launch its first satellite and has achieved nuclear power and peace with Pakistan, but apparently has no seismographic warning system to advise of impending tsunamis, or maybe just doesn't care about the welfare of its citizens. This y F o also played a part in the growth of the wave, as the Ascendant at the quake's location transited the sign b during the second hour after the quake. The same trine also points to religious problems, for only the Untouchables, those modern slaves, are allowed to burn the bodies of the tsunami's victims. On 26th December the beaches were crowded with people preparing to celebrate the full Moon with ritual communal bathing. The solar revolution has q A o at the MC, and the w transiting the natal Ascendant in a. The emphasis of both o and the w points to events connected with the sea. i is closely trine its natal position, and the semisextile from the w to i in the solar return will have enlivened both this i aspect to its own position and the natal w G i, and these in turn will have brought to life the deadly i S MC in India's nativity. “ at 21º c sextiles the October 2004 position of the eclipse. India suffered fewer casualties than the other countries and seems to have been less concerned about its losses. Perhaps, due to the country's immense size and diversity, the tsunami was just one more accident.
WHY SRI LANKA? There were 31,000 deaths. h rises and q A u in d is at the MC in Sri Lanka's natal chart. Both h and d are double signs ruled by e, which perhaps explains why the country is divided in two between Singhalese and Tamils. It was the first country to put children back into school (e), which it did soon after the catastrophe. The aspect linking to the quake is w A “, F q. The solar revolution w is at 2° f. The q/w midpoint is widely opposed to “, which revives the natal q-w-“ connection this year. r and t are also connected with these three, a connection that was also present in the natal chart. The w is conjunct the –, its apogee; on the day of the quake the w was back at this place, squaring natal “ and applying to transit over the natal r A t. Millions of people were displaced. The natal y D i is an opposition in the solar return, with y, the MC ruler, in detriment at the IC. There are many dangerous aspects but also indications of financial help with i, the natal 2nd house ruler, poised at the return's Midheaven. At the time of the earthquake, r, the MC ruler, was at the Descendant of Sri Lanka's solar return (this degree had been the Ascendant of the October eclipse at the earthquake location). The q was then opposing the return's w, and the w was crossing the return's –, r and w, all of which explains the high number of casualties in such a small country.
A Review of the Facts

Each of these four countries has q-w-“ connections in its native horoscope. These aspects are present in the solar returns for three of them. They caused the earthquake. “ closely aspects the solar eclipse of 14th October 2004. The countries with the highest number of mortalities, Indonesia and Sri Lanka, had a w A – (the w at its apogee, when it is most distant from the earth) either in the nativity or in the solar return.

What Ptolemy says:

1) Examine the concerned countries that have a link with the sector of the zodiac in which the eclipse occurs. None of the countries involved in the 26th December earthquake and tsunami are z countries, but they all have in common the key aspects of the eclipse, present either in their nativity or in their solar return: the q-w-“ connection and the w-– aspect.

2) Timing of the events: how long an eclipse lasts over the country. This worked in 1999 on the countries swept between the western Atlantic and the Gulf of Bengal by the eclipse of 11th August. In addition, if the eclipse is closer to the MC than to the Ascendant, the event should take place at least four months after the eclipse, which is not the case here where the event occurred within three months, but would apply were another cataclysm to occur between March and June.

3) The nature of the event. Many rulers of the 4th house (the ground and the underground) are present in the 10th house (indicating what will happen) in this matter of tectonic plates collapsing. The ruler of the concerned house must be surveyed, as well as its celestial state.

A Suggested Approach for Predicting Earthquakes

From the eclipse's chart identify the dominant planets, those that aspect the q. List these planets that aspect the eclipse and note when they will again aspect the transiting q. When several of these repeat pairings occur at the same time, it is a dangerous period, and if the aspects are within a 1° orb this is likely to be the very day of the earthquake, as in Athens on 7th September 1999 and in Taiwan on 20th September 1999 (see Considerations, XVII: 1). Finally, note the transits of the key planets to the eclipse's angles at different locations (here, r at 11° c on 26th December 2004).

Translated from the French by Vincent Rouffineau
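A hedged sketch, in Python, of how the watch-list procedure suggested above might be automated. It is not part of Girard's article; the orbs, names and data layout are my own assumptions.

# A rough sketch of the suggested approach: find the planets aspecting the q in the
# eclipse chart, then count how many of them repeat a Ptolemaic aspect to the
# transiting q on a given day. Orbs are illustrative, not the author's.

PTOLEMAIC = [0, 60, 90, 120, 180]

def separation(a, b):
    """Shortest angular distance between two zodiacal longitudes."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def aspects_sun(lon, sun, orb):
    return any(abs(separation(lon, sun) - asp) <= orb for asp in PTOLEMAIC)

def watch_list(eclipse, orb=6.0):
    """Planets in aspect to the q in the eclipse chart -- the 'dominant' planets."""
    sun = eclipse["Sun"]
    return [p for p, lon in eclipse.items() if p != "Sun" and aspects_sun(lon, sun, orb)]

def risk_score(transits, watch, orb=1.0):
    """How many watch-list planets again aspect the transiting q within the orb."""
    sun = transits["Sun"]
    return sum(aspects_sun(transits[p], sun, orb) for p in watch if p in transits)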
Pope Benedict XVI
VIRGINIA REYER
POPE BENEDICT XVI was born Joseph Alois Ratzinger on Saturday April 16, 1927 at 4.15 a.m. at Marktl am Inn, in Bavaria. He is a German-born ecclesiastic, the son of a police officer from a farming family in lower Bavaria. His family was against the Hitler regime but they had to keep this private to avoid imprisonment. Joseph and his brother Georg were enrolled in the Hitler Youth, which was mandatory. Teens and older folk, anyone capable of being of help to the Nazis, were enrolled in the Volkssturm Abteilung. They had to bring supplies to the front in anything with wheels; gasoline was very scarce. Walking back from the front into Germany the Volkssturm had to take anything back to Germany that could be needed, even people. Many people were hiding in the woods, including me. I was born and lived in occupied Luxembourg during the war. Joseph was drafted at age 16 into the German auxiliary anti-aircraft service; he escaped and returned home. At the end of the war he was held for a few months in a US prisoner-of-war camp.
In 1946 the future pope began studies in philosophy and theology at the University of Munich. He was ordained as priest on June 29, 1951. In
1957, he qualified as a university teacher and began teaching. In 1969 he became a professor of theology. In 1977 he was appointed Archbishop of Munich, being installed on May 28, 1977. He was elevated to Cardinal one month later and became private advisor to Pope John Paul II. He is now Benedict XVI, considered to be genuinely pious, intellectually brilliant, blunt and passionate about the truth. He is fluent in several languages. His experiences during the war, seeing the horrors perpetrated by the Nazis, were critical in forming his ideas about the role of the church; his job as Pope will be challenging. He was elected Pope April 19, 2005 at about 5.50 PM, Vatican time, and took the name of Benedict XVI. y is conjunct his Ascendant in n. This gives a great deal of spiritual wisdom and mystical insight that allows him to penetrate the subtleties of human nature. It makes him a sympathetic, adaptable, ethereal and visionary personality, receptive to all kinds of influences. With y the co-ruler of n, there is altruism, kindheartedness, a love of solitude and quiet happiness. y in this position brings harmony, religion, law, justice, optimism, moral and religious aspirations, a moral character and qualities of leadership. The e/y midpoint at the chart's East Point symbolizes contacts and relationships with others and is out among the general public. The anti-Vertex conjunct the East Point is the manner of interaction with others and gains public notice. e conjunct the fixed star Scheat makes the Pope mentally keen but he can be criticized through his actions or statements. The q F k gives leadership ability and shows that he will gain prominence in this life; it makes him aware of his mission in life. There was a full w the day after birth at 26º48' z in the 7th house, showing he will be able to integrate and illuminate others. The q F o, with o co-ruler of the Ascendant, puts greater focus on religion and spirituality; it nurtures inner conditions that were developed in his past, and gives him an intuitive understanding of others. The 6th house o's semisextile to the 7th house Vertex puts emphasis on his work and the services he performs in a spiritual way and demands many sacrifices. The same aspect shows that he also cares much for pets and animals, especially for domesticated animals, and says that they should be protected. q/o A IC gives deep inner experiences and the cultivation of soul-life. t D i indicates strong emotional tension. t is A l in a critical degree; anything at 0º has to do with changes or new beginnings.
The w last passed over o and its next conjunction will be to u. The u/o midpoint is 15º36' z, close to the w, an indication of spiritual maturity and compassionate religious insight. It also enhances musical activity and religious tendencies—the Pope is a good piano player. With w D “ there is a tendency to let the past die and to seek drastic changes in his life. The Vertex is S e and D t, which can feel like being cut off and not belonging to the mainstream; it restricts partnerships and relationships. u is the uppermost planet; it gives power in the expression of languages and oratory, and is in a powerful degree. r C t at 29º is a degree of fate and a testing ground; the pair rule the intercepted signs in the 1st and 7th houses. r predominates, giving a melodious voice and musical ability; r F Vertex receives help from friends. r C t adds vitality to the emotional nature, renders t more refined, and gives an energetic disposition. r G e is grace and cheerfulness. t A l gives a desire for collaboration and cooperation with others. Natal t is out of bounds; it is karmic and he does his own thing. He was elected Pope on April 19, 2005; the transit chart has 6º07' z at the Ascendant, opposed by e—a hasty selection—and by the r/i midpoint—a demonstration of love in the presence of others. Progressed “ is conjunct the natal MC—he has an important spiritual mission to perform for the regeneration of the forces controlling society; the event chart's “, retrograde in the 3rd house, trines the l, and this will help him attain a higher level of expression. There is a collection of prophecies attributed to St. Malachy, an Irish archbishop who lived during the Middle Ages. These prophecies, which provide brief descriptions of future popes, were kept secret until the end of the 16th century. Many have tried to match the mottoes with various popes. Many link Pope Benedict XV, who reigned from 1914 to 1922, to "religion laid waste", considering World War I. According to Malachy's prophecies the new Pope Benedict XVI, the 265th pope and successor of Peter, the founder of Christ's church, is described by the phrase gloria olivae or 'glory of olives'. What can this mean? The Benedictine order has a subgroup known as the Olivetans, and St. Benedict, the founder, is said to have prophesied that the last pope would be a Benedictine and would usher in an age of world peace. The olive branch is a symbol of peace. The new Pope took the name Benedict upon his election; he has strong ties with the Benedictines and has spent several retreats with them. Is that enough to fulfill the prophecy?
The Qualities & the Temperament
KEN GILLMAN
ATTEND MOST astrological conferences and chances are that you will be handed a name tag on which you will be expected to add the Zodiacal signs occupied at your birth by the q, w and Ascendant. This is an important ice-breaker, one we have that attendees at non-astrological conferences do not. Knowing someone's rising sign and the birth signs of their q and w tells us an enormous amount of information. Much more than the name. We know immediately whether this is a person with whom we are likely to get along, whether it is someone who views the world in a manner similar to how we do. We tend to mingle with such people; they are our types. This information also quickly identifies those we probably should avoid; they won't have much sympathy with our views and any attempted conversation will only lead to misunderstanding. The combination of the q, w and Ascendant, adjusted in turn by the appropriate annual, monthly and daily cycle, and by close aspects, helps us identify an individual's temperament, how he or she will tend to react in any situation.
Qualities of the Moon

The q sign, discussed in detail in the previous issue, points to the month in which a person is born. The placement of the w says when during the month this occurs. The qualities of the w remain the same irrespective of where on earth, north or south of the equator, someone is born. The seasons do not affect the w but its four phases do. The first of the four lunar phases extends from the w's conjunction with the q until it has become a quarter w, at which time the w has advanced 90º beyond the q. The w is young and like all youthful beings its qualities are a mixture of moisture and heat. In its second phase, as the waxing w moves from its first quarter to its fullness, the q-w elongation extending from 90º to 180º, the w's quality is a mixture of heat and dryness. Following the opposition, the waning w moves on to its last quarter, the elongation enlarging from 180º to 270º. During this third phase the w's quality is a mixture of dryness and cold. Finally, in its fourth and final phase, as the old w moves back toward the q, we have the time when the quality of the w is cold and moist. The waxing w is outgoing and extraverted, it is hot; the waning w is introverted and less sociable, it is cold. The closer the w is to the q the more feminine and fertile, she is moist; the more distant the more masculine and life-ending, she is then dry. Thus the w splits the month into four parts just as the q splits the year into twelve, and the sequence of the four divisions is similar, going from moisture to heat to dryness and then to cold. There is, however, one important difference. When the q is in a specific sign, ignoring the hemisphere seasonal change for the moment, it will always be in the same quadrant of the year—the q is in s only in springtime, never in summer, autumn or winter. This does not apply to the w, which can be in a particular sign in any of its four phases.

Table 4: Phase Modifications to the w's Sign Quality

                                   w's quarter
w sign       Sign quality          1st (HM)    2nd (HD)    3rd (CD)    4th (CM)
a, g, c      hot & dry (HD)        H+ D-       H+ D+       H- D+       H- D-
s, h, ¦      cold & dry (CD)       C- D-       C- D+       C+ D+       C+ D-
d, z, b      hot & moist (HM)      H+ M+       H+ M-       H- M-       H- M+
f, x, n      cold & moist (CM)     C- M+       C- M-       C+ M-       C+ M+

(1st quarter = Hot & moist, 2nd = Hot & dry, 3rd = Cold & dry, 4th = Cold & moist)

There are of course differences between the three w signs within the same element, just as there are between q signs within the same element—the fixed sign of an element having the most equal mix of the element's two qualities, the cardinal retaining traces of the previous element, and the mutable sign already becoming attuned to the upcoming element—but for now we will ignore these. The notation D- in Table 4 tells us that this is a phase where there is less of the passion and other attributes associated with dryness than we usually expect due to the w's presence in this particular sign. D+ says there is more passion, more tension, etc., than usual. M+ indicates a greater than expected adaptability. M- points to less sensitivity, less pliability. H+ suggests greater than usual energy and extraversion. H- says less enthusiasm than might be expected. C+ tells of more restraint, the person is shyer than is usual. C- indicates the individual is less introverted.
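The phase logic of Table 4 reduces to simple arithmetic on the q-w elongation. A minimal Python sketch, my own and not the author's, with purely illustrative figures:

# The Moon's quarter follows from its elongation ahead of the Sun; each quarter
# carries the quality pair shown at the head of Table 4.

def lunar_quarter(sun_lon, moon_lon):
    """Return 1-4, the w's quarter, from its elongation beyond the q (degrees)."""
    elongation = (moon_lon - sun_lon) % 360.0
    return int(elongation // 90) + 1

QUARTER_QUALITY = {1: "hot & moist", 2: "hot & dry", 3: "cold & dry", 4: "cold & moist"}

# Example: q at 10.0 degrees, w at 85.0 degrees -> 75 degrees elongation, 1st quarter
print(QUARTER_QUALITY[lunar_quarter(10.0, 85.0)])   # hot & moist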
The Ascendant

Daylight hours are warmer than those of the night. The quadrant from the Ascendant to the MC and its opposite (from the Descendant to the IC) are masculine and dry. The others are feminine and moist. As a result the qualities of the quadrants differ from the seasonal sequence we see in the year. To take this into account the qualities of the ascending sign are modified by the quadrant the q occupies. As the earth turns it brings different signs up to the eastern horizon, one after another. At sunrise the sign containing the q is at the Ascendant. The remainder of the 1st house and all of the 2nd and 3rd houses, as defined at sunrise, will rise to the Ascendant and continue on up into the daytime sky as the q ascends through the southeastern quadrant, moving from its sunrise position to the Midheaven. These ascending signs are coming into a masculine world that is hot and dry and they are modified because of this. Similarly, as the q descends in the sky from the Midheaven to its position at sunset, the signs that were on the 4th, 5th and 6th houses at sunrise will rise over the eastern horizon and move into the daylight. They make their appearance during the afternoon, coming into a hot and moist environment, and this will feminize them. Again, in the evening, between sunset and the q moving to the IC, the rising signs will be those that were in the 7th, 8th and 9th houses at daybreak. The heat of the day has gone and it is now cold and dry. This cold and dry environment will modify these rising signs. Finally, between midnight and the start of the new day, as the q moves through the northeastern quadrant, the signs that were on the 10th, 11th and 12th houses when this 24-hour day began will be rising into the night sky over the eastern horizon.1 The environment in these pre-dawn hours is cold and moist and these rising signs will be modified accordingly.

1 The last of the twelve houses to rise before the new day begins is the one that was appropriately numbered as the 12th at sunrise—an obvious fact that should answer those who are troubled by the houses being numbered as they are in a counter-clockwise sequence.
Table 5 shows these modifications of each rising sign's elemental nature according to the quadrant occupied by the q at the moment of birth.

Table 5: Ascendant Sign by q's Quadrant

Ascending    Ascendant's    Modified by the q's position by Quadrant/House
sign         Element        SE (HD)       SW (HM)      NW (CD)      NE (CM)
                            10, 11, 12    7, 8, 9      4, 5, 6      1, 2, 3
a, g, c      HD             H+ D+         H+ D-        H- D+        H- D-
s, h, ¦      CD             C- D+         C- D-        C+ D+        C+ D-
d, z, b      HM             H+ M-         H+ M+        H- M-        H- M+
f, x, n      CM             C- M-         C- M+        C+ M-        C+ M+
Given what has been discussed to this point, the notation in Table 5 should be self-explanatory. We see, for example, that an individual born with a d Ascendant when the q is in the 7th, 8th or 9th house, in the southwest quadrant, is an extreme d, and this is in complete contrast to the more conservative version with the same d Ascendant born shortly after sunset when the q is placed in the north-west quadrant, in the 4th, 5th or 6th houses.
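The Table 5 lookup can be written the same way. A short Python sketch; the mapping of houses to quadrants follows the table, everything else is my own framing rather than Gillman's:

# The Sun's house fixes the quadrant, and the quadrant supplies the qualities
# that modify the rising sign's element, as laid out in Table 5.

SUN_QUADRANT = {
    (10, 11, 12): ("south-east", "hot & dry"),
    (7, 8, 9):    ("south-west", "hot & moist"),
    (4, 5, 6):    ("north-west", "cold & dry"),
    (1, 2, 3):    ("north-east", "cold & moist"),
}

def sun_quadrant(house):
    for houses, info in SUN_QUADRANT.items():
        if house in houses:
            return info
    raise ValueError("house must be 1-12")

# Example: d rising (hot & moist) with the q in the 8th house -> south-west, hot & moist: the 'extreme' d
print(sun_quadrant(8))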
The Ascendant-Ruler & the First-to-Rise Planet

When evaluating a person's inherent temperament, how he or she will automatically react to any new situation, we take into account the qualities associated with the q, w and Ascendant, as has been explained. These three are in turn modified by any planet closely aspecting them. In addition, it has been the usual approach to include the qualities associated with "the most powerful planet." Originally this most powerful planet appears to have been defined as the Almuten of the Horoscope, namely the planet most dignified at the rising degree, taking into account the rulers of the ascending degree's sign, exaltation, triplicity, term and face. Later this changed and by William Lilly's time it had become the Ruler of the Geniture, the planet deemed to be the strongest in the whole chart, as identified by a complex scoring system. I have compared these two approaches using the charts of many individuals, mainly family and friends, for whom I am confident that I truly know their temperaments and, while I find the Almuten of the Horoscope to be more helpful than the Ruler of the Geniture, I find that neither provides a greater insight into an individual's temperament than does the ruler of the ascending sign. Accordingly, this is what I use. There is a tradition that states that the planet ruling the ascending sign tends to be largely ineffective when it is located in the 2nd, 6th, 8th or 12th signs from the Ascendant. At these times it does not aspect (in the traditional sense of the word) the Ascendant. I have not found that this diminishes the Ascendant-ruler's influence on the temperament. Two of the examples that follow are of this type. Besides the Ascendant ruler, I find the planet that will next ascend over the eastern horizon after the birth has a great influence on how a person will react to any situation. This is something I've learned from my 30-plus years' study of the Septenary. This first-to-rise planet (I only use the seven classical bodies) may not be the most powerful planet in terms of its dignities, angularity or elevation; it may be in its detriment or fall, retrograde, in a cadent house, or in any of the many other ways a planet is said to be weakened; but whether strong or weak the planet that next rises after birth always greatly influences how an individual responds to any stimulus. It helps that no scoring system or reference to a detailed table of dignities is required in order to identify either the Ascendant ruler or the planet that will rise next after the birth.
The Planets

So far we have been considering the two lights, the q and w, in terms of their seasons, and have done so with respect to how the signs they occupy are modified by the annual solar and monthly lunar seasons. As such we have ignored the essential beings of the q and the w. The q is predominately hot with some dryness: hot because the q is obviously strongest in daylight hours when the heat is greatest, dry because dryness is the masculine quality and the q is the archetypal male. The w is the reverse, cold & moist, for contrary reasons: she delights in the night, which is cold, and she is moist because she is the essential female. Between them, very much as one might expect, the two lights possess all four qualities.

The same four qualities are associated with four of the five remaining classical bodies. u is basically cold, t essentially dry, r inherently moist, and y predominately warm. See the appended discussion of the four qualities, previously given on pages 71 and 72 of Considerations XX: 2. y is essentially expansive; u is restrictive, slow and indifferent; t is willful; and r is ever permissive. Depending on the tradition one follows, u is defined as cold & dry by some and cold & moist by others, and the same contrariness applies to each of the other three planets: t is sometimes said to be hot & dry and on other occasions cold & dry; r can be either hot & moist or cold & moist; and y is said to be hot & moist by some and hot & dry by others. Rather than blindly adhering to any one particular tradition, I have attempted to work from first principles, as I understand them, adding the appropriate masculine quality, hot or dry, whichever is missing, to a planet's essential nature when it rises before the q. When a planet rises after the q, I replace this as appropriate with cold or moist, the missing feminine quality. I give e the qualities of the signs it rules, using the same rules. Table 6 lists these variations.

Table 6: Qualities of the Planets
Planet   Basic quality   Oriental        Occidental
r        Moist           Hot & Moist     Cold & Moist
e        Variable        Hot & Moist     Cold & Dry
t        Dry             Hot & Dry       Cold & Dry
y        Hot             Hot & Dry       Hot & Moist
u        Cold            Cold & Dry      Cold & Moist
q        Hot & Dry       (already discussed)
w        Cold & Moist    (already discussed)

The l is treated as y, the L as u. The above allocations differ from those given by Lilly. i is considered to be hot & dry; o is thought to be cold & moist. There is no agreement on the qualities of “, but cold & dry makes sense to me. These new-found planets are only used when involved in particularly tight aspects. I do not amend their quality combinations by their relationship to the q on the infrequent occasions these planets are used in defining temperament.
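The rule behind Table 6 is mechanical enough to express in a few lines of code. The sketch below, written for this article and not drawn from any astrological library, reproduces the body of the table from the stated rule: a planet keeps its basic quality and gains the missing masculine quality (hot or dry) when oriental, or the missing feminine quality (cold or moist) when occidental. Treat it as an illustration only; e, the q and the w are the special cases the table notes.

```python
# Illustrative sketch of the rule behind Table 6 (not from any astrology library).
# A planet keeps its basic quality; when oriental (rising before the Sun) it gains
# the missing masculine quality (hot or dry), when occidental the missing feminine
# quality (cold or moist).

BASIC = {            # basic natures as stated in the text
    "Venus": {"moist"},
    "Mars": {"dry"},
    "Jupiter": {"hot"},
    "Saturn": {"cold"},
}

MASCULINE = ["hot", "dry"]      # candidates added when oriental
FEMININE = ["cold", "moist"]    # candidates added when occidental

OPPOSITE = {"hot": "cold", "cold": "hot", "dry": "moist", "moist": "dry"}

def qualities(planet: str, oriental: bool) -> set[str]:
    """Return the pair of qualities for one of the four single-quality planets."""
    pool = MASCULINE if oriental else FEMININE
    result = set(BASIC[planet])
    for q in pool:
        # add whichever quality is not already present and does not
        # contradict the planet's basic nature
        if q not in result and OPPOSITE[q] not in result:
            result.add(q)
    return result

if __name__ == "__main__":
    for planet in BASIC:
        print(planet,
              "oriental:", sorted(qualities(planet, True)),
              "occidental:", sorted(qualities(planet, False)))
    # Mercury takes the qualities of the signs it rules (hot & moist oriental,
    # cold & dry occidental); the two lights keep their own natures.
```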
The Four Temperaments

The calculation of temperament is a vital first step in analysis of the nativity. Skipping this stage leaves us ignorant of the nature of the beast with which we are dealing. No matter what the other indications in the chart may tell us about the nature, omitting this initial step leaves us ignorant of whether the nature in question is that of a tiger, a gibbon, a bull or a giant sloth. This is a distinction of no small significance. Most of what we read from the chart, and certainly all we read from it on the psychological and physical levels, must be read in the light of this initial distinction (if we decide that our native is a bad-tempered tiger we have more cause to worry than if he is a bad-tempered sloth!) —John Frawley2

2 The Real Astrology Applied (2004. London: Apprentice Books), p. 121.
Before discovering how to identify the temperaments from the birth chart, we should see just who these four types are. In summary:
The melancholic is ever serious, the sanguine light-hearted; the choleric always wants to take the lead, and the phlegmatic is content to let him.

SANGUINE (Hot & Moist)

A sanguine person is a free and liberal spirit, an agreeable, friendly individual who is apparently without a trouble in the world. He will act from good-hearted impulses and be sympathetic and benevolent. He can be the best of good companions, always entertaining, lively and cheerful, seeking joy for himself and providing amusement for others. He is bold, courteous, always optimistic, trusting that things will always end to his benefit, faithful and liberal. He is kind, generous and charitable, but also somewhat unprincipled and not too good at repaying his debts. He enjoys life and loves change; but his purposes are easily deflected by any variation in circumstances. He should never become a judge, for he finds the laws far too strict and he is too easily influenced by bad-luck tales. According to Ramon Lull,3 sanguine people are naturally more faithful than others and more trusting. They are more willing to show what they know to others because they think more than others; they do not naturally hold secrets. They learn and understand quickly but soon forget. They do not fear poverty or need and naturally have more desire for infants, whom they love and want to have more than other men, whereby they multiply the human race.

Fat and jolly of nature are those of sanguine humor.
They always want to hear rumors,
Venus and Bacchus delight them, as well as good food and laughter;
They are joyful and desirous of speaking kind words.
These people are skillful for all subjects and quite apt;
For whatever cause, anger cannot lightly rouse them.
They are generous, loving, joyful, merry, of ruddy complexion,
Singing, solidly lean, rather daring and friendly.
—Regimen Sanitatis Salernitanum4
When unable to enjoy himself a sanguine person will become vocally discontented, a complainer. He gets downhearted when others are sad. When someone else is ill or in trouble he will feel true and unfeigned compassion, but then gets distracted and quietly slips away until circumstances change. At his worst he can become childish and trifling. Nietzsche considered the sanguine to be Dionysian; he described the self-indulgence of this type as hedonistic aestheticism.

3 13th-century Spanish philosopher and prolific author. He created a symbolically rigorous astrology based on the qualities. His Treatise on Astronomy has been translated by Kris Shapar and published by Project Hindsight.
4 Written sometime during the 12th or 13th centuries.
CHOLERIC (Hot & Dry)

Next is the choleric humor, which is known to be impulsive:
This kind of man desires to surpass all others.
On the one hand he learns easily, he eats much and grows quickly;
On the other hand, he is magnanimous, generous, a great enthusiast,
He is hairy, deceitful, lavish, bold,
Astute, slender, of dry nature, and of yellowish complexion.
—Regimen Sanitatis Salernitanum
Here we have a proud, strongly opinionated, somewhat superficial individual who is very conscious of his own worth and overly concerned with appearances. He will come across as an authority on whatever subject he is discussing, has a sense of honor, is very ambitious, and has an excellent imagination. He wants to be recognized, for others to acknowledge his status in life and the value of his possessions—his wealth, his home, his family, his car, his college and his background—and can't bear being ignored, for his opinion not to be asked and deferred to. He is a natural leader and delights in lording it over others, which he assumes is his natural right. Because of how he behaves, others usually think well of him. He is quick-witted, bold, extremely self-confident, courageous and quarrelsome, usually has very good manners and presents himself well. He can be the essential politician: all things to all people, altering his beliefs to suit his audience and circumstances. He is not at all unintelligent but may sometimes appear to have more understanding than he really has. There is haste and impatience in his manner, a cold-heartedness, a lack of any true consideration for the feelings of others, and a somewhat domineering nature. He will become angry when things don't go the way he would wish or when others fail to give him his full due. There is a certain detachment from the limitations of reality that can become intolerance and ruthlessness.

Ramon Lull says the choleric is more quickly angered than another, and has greater anger, but the anger does not last long. He is quick to make decisions, quick to undertake something and quick to neglect it; he is inconstant and always ready to do an about-face. At his worst the choleric will want to fight anyone he believes has insulted him and, when this is impossible, he will threaten to sue them in court, and often does. Should his vanity, his need to achieve a reputation and be noticed, become over-inflated, he is likely to make a fool of himself and display childish tantrums. The philosopher Immanuel Kant5, a good source for descriptions of the temperaments, says the choleric "is very much given to dissembling, hypocritical in religion, a flatterer in society, and he capriciously changes his political affiliation according to changing circumstances." Kant goes on to say that the choleric person has some principles, unlike the sanguine character who has none, but these are not principles of virtue but of reputation, and he himself has no feeling for the beauty of an object but only for what the world might say about it. t is the planetary ruler of the choleric.

5 In his Observations on the Feeling of the Beautiful and Sublime, 1764.
MELANCHOLIC (Cold & Dry)

The melancholic is slow and methodical, a deep thinker with a well-developed imagination, an earnest, diligent, conservative, solitary man of principles. He is usually mistrustful and suspicious of others, and can be slovenly in his habits. Unlike the choleric, he is not at all concerned with what others may think, preferring to make up his own mind. He is introspective and is said to be privy to a deep understanding of the workings of the world. Indeed, he has a predominant longing for knowledge, a curiosity covering everything from the most insignificant things in everyday life to the most profound scientific investigations. He can be obstinate, rarely changing his ideas once he has formed them. He is indifferent to the direction in which the wind is blowing, to changes in fashion or politics. He makes permanent friends and keeps secrets. He is always truthful, hating lies and any form of dissimulation—he values himself and respects others too much to lie. He is rarely submissive, demands freedom, and abominates any form of slavery or bondage. He is a strict judge of himself and of others, and experiences dark depression. For Nietzsche the melancholic is Promethean, a libertarian who indulges himself on the basis of his own efforts and earnings, which naturally limits just how far his self-indulgence can go.

There remains the sad substance of the black melancholic temperament,
Which makes men wicked, gloomy, and taciturn.
These men are given to studies, and little sleep.
They work persistently toward a goal; they are insecure.
They are envious, sad, avaricious, tight-fisted,
Capable of deceit, timid, and of muddy complexion.
—Regimen Sanitatis Salernitanum
Ramon Lull says the melancholic can attain to great knowledge, but he is suspicious and will constantly worry about the future. According to Kant, the melancholic is the one temperament that will act on his principles, "which is not necessarily good as it can so easily happen that one errs in these principles, and then the resulting disadvantage extends all the further, the more universal the principle and the more resolute the person who has set it before himself." In other words, if you are going to act according to principle, be sure you have the right principles. At his worst, the melancholic is easily dejected and may seem to be perpetually sad and depressed, or he will become a fanatic for some cause or other. If insulted, or if he feels he has experienced some injustice, he can become extremely vengeful, never forgetting a slight and refusing to give up until he has coldly extracted complete satisfaction for the wrong he suffered. He can be superstitious and experience meaningful dreams.
PHLEGMATIC (Cold & Moist)
Phlegmatic types had a bad press in the 17th century. Culpepper6 described them as "Very dull, heavy, slothful, sleepy, cowardly, fearful, covetous, self-loving, slow in motion and shame-faced." In fact, the phlegmatic is the most practical of persons; he is an amiable fellow who is concerned simply with self-interest and usefulness: what's in it for me and mine? He is the diligent, orderly and prudent working man or woman, who will react—in his own good time—to the immediacy of any situation. He cares for what is useful and has little feeling for what others consider artistic or beautiful. If an action is unlikely to produce a profit he is unlikely to have much interest in it. Self-interest as such is neither right nor wrong. We don't receive our food, clothing, cars or computers because the people who manufacture them are being benevolent; those who create these items are prudently trying to make a living, and we benefit from their diligence and prudence.

Phlegm makes man weak, stout, short,
And fat, while the blood humor makes men of medium build.
Men of phlegmatic humor tend toward leisure rather than work, and
Dullness of sense, slow movement, laziness, and sleep are typical.
Those sleepish and sluggish men, who spit often,
Are dull of senses and white in coloring.
—Regimen Sanitatis Salernitanum
Kant writes about the phlegmatic: "But most are among those who have their best-loved selves fixed before their eyes as the only point of reference for their exertions, and who seek to turn everything around self-interest as around the great axis." To Nietzsche the phlegmatic person is Epimethean, the most conservative of all temperaments.

The four temperaments can be related more or less directly to the four major classes of people in early society. In the Hindu caste system, for example, the highest caste is the priestly Brahmin. Next follows the Ksatriya, who were the warriors and rulers. Below them in the pecking order are the Vaisyas, the merchants, farmers and artisans. The fourth and lowest caste is the Sudras, the laborers and serfs. Outside of the caste system are the outcasts, the untouchables, who are polluted laborers and slaves. The same four classes existed throughout medieval Europe: priest, knight, merchant and peasant. This class division may help us better understand why the phlegmatic temperament was held in such low esteem among early astrologers. Members of this lowest social class, the laborers, serfs and peasants, were unlikely to be profitable customers for the astrologer. If the phlegmatic are the laboring peasants, then the choleric can be thought of as the secular rulers, the aristocracy. The priestly classes are then related to the melancholic, and the merchants to the sanguine.

6 Nicholas Culpepper, Astrological Judgement.
Finding the Temperament
Some examples of how to find a person's temperament from the birth chart follow. First, the example Lilly gives in his Christian Astrology7. This is important, as Lilly admitted that his recommended method failed to give the correct temperament for this individual. He carefully worked out that hot & dry predominated, which pointed to his subject's temperament being choleric, but then confessed that the merchant was really sanguine-melancholic. Let's see how the approach I am recommending fares.

7 Originally published 1647, pp. 742-744.
The q is in an autumnal air sign, hence hot & moist in a cold & dry season, in symbols: H- M-. This is modified by its placement between r and e, conjunct both. r is occidental (cold & moist) in a hot & moist sign (thus C- M+). e is oriental, so hot & moist, in z, a hot & moist sign, and so extra hot and extra moist, H+ M+. These modifications combine to make the q quite hot and very moist, H+ M+++.

The w is in d in her third quarter, a hot & moist sign in a cold & dry lunar season (H- M-). She is G t, which is hot & dry in a hot & dry sign (H+ D+). I will ignore the lunar trine to the e-q-r group as being too wide. The modification by t makes the w very hot but with little moisture, H+ M- -.

The Ascendant is in ¦ (cold & dry), which is modified by the q's placement in the hot & moist south-west quadrant, giving C- D-. The Ascendant is squared by the e-q-r group. The q is hot & dry in a hot & moist sign (for aspects by the q or w we revert back to their inherent natures, ignoring the solar or lunar season), occidental r is cold & moist in a hot & moist sign (C- M+), and e is extra hot & extra moist (H+ M+). We must not omit the sextile from u. u is oriental, hence cold & dry, in a cold & dry sign (C+ D+). These all combine to further reduce the cold and dryness of the Ascendant, giving the Ascendant as C- - D- -. Note that the basic quality of a ¦ Ascendant is cold & dry. The various modifications only serve to adjust the amount of this inherent cold and dryness; they cannot cause the cold to become heat or the dryness to become moisture. This approach is employed throughout.

u is the ruler of the Ascendant. As we've just noted, it is a cold & dry body in a cold & dry sign, hence very cold and very dry, C+ D+. It is closely A “, another cold & dry body, which can only add to u's melancholic nature here, but I will ignore it. As I use only Ptolemaic aspects when identifying the temperament, the quincunx from the group in s is also ignored. The Ascendant ruler therefore gives us C+ D+.

The first planet to rise after the birth is u, which we have already discussed. We therefore have from the different indicators:

q                H+ M+++
w                H+ M- -
Ascendant        C- - D- -
Ascendant ruler  C+ D+
1st to rise      C+ D+

Cold and dryness predominate, but the pairing of heat and moisture is almost as strong. We can therefore say that the temperament of Lilly's English Merchant is a mix of melancholy (cold & dry) and sanguine (hot & moist), exactly as Lilly confessed it was. My study of the four temperaments shows that the mix of traits found for Lilly's merchant tends to be typical; most people display characteristics of two or more temperament types, rarely just one.
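Because the notation is simply a running count of hot, cold, dry and moist testimony, the tallying can be mechanized. The sketch below is a hypothetical illustration written for this article: the numeric weights, and the 0.25 floor that stops a much-weakened quality from vanishing or flipping into its opposite, are assumptions made only to demonstrate the bookkeeping, not part of the author's method.

```python
# A hypothetical way to mechanize the signed notation used above
# (H = hot, C = cold, D = dry, M = moist; '+' strengthens, '-' weakens).
# The weights and the 0.25 floor are illustrative choices only.

def weight(token: str) -> float:
    """'H+' -> 2, 'H' -> 1, 'C- -' -> 0.25 (much weakened, but still cold)."""
    return max(1 + token.count("+") - token.count("-"), 0.25)

def tally(indicators: list[str]) -> dict[str, float]:
    """Sum the testimony of every token under its quality name."""
    totals = {"hot": 0.0, "cold": 0.0, "dry": 0.0, "moist": 0.0}
    names = {"H": "hot", "C": "cold", "D": "dry", "M": "moist"}
    for token in indicators:
        totals[names[token[0].upper()]] += weight(token)
    return totals

if __name__ == "__main__":
    # The five indicators for Lilly's English Merchant, as listed above.
    merchant = ["H+", "M+++",      # Sun
                "H+", "M- -",      # Moon
                "C- -", "D- -",    # Ascendant
                "C+", "D+",        # Ascendant ruler
                "C+", "D+"]        # first planet to rise
    print(tally(merchant))
    # Cold just edges out hot and dry ties with moist: read, as the article does,
    # as a mixed melancholic-sanguine temperament rather than a single label.
```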
Now for some modern examples: OPRAH WINFREY has become an extremely wealthy woman (11th ruler in 11th G l in its exaltation sign, 5th ruler in 2nd), a successful actress, publisher, and talk-show hostess, but she has done so from extremely difficult beginnings: raised on a pig farm by her grandfather, raped at the age of nine, a mother at age fourteen.
Her q is in b, a hot & moist sign in a cold & moist season, hence only slightly hot but very moist, H- M+. It closely squares the oriental u, a cold & dry planet in a cold & moist sign (C+ D-), which greatly reduces the heat of her q and also tempers its moisture, making H- - - M. The q is also A r and G w. r is oriental, a hot & moist planet in a hot & moist sign, so very hot and very moist (H+ M+). The w is a cold & moist planet in a hot & dry sign (C- M-). Combining all of these modifications makes the q only slightly hot but very moist, H- M++. The w is in c, a hot & dry sign, in its final cold & moist phase, well balanced in all four qualities, H- D-. She is G r and G q. r is hot & moist in a hot & moist sign (H+ M+). The q is hot & dry in the same hot & moist sign (H+ D-). Combine these, and we can say the w is H+++ D- -, very hot with only a trace of dryness. The stated time places the end of c on the Ascendant, but this doesn’t sound right. y, the ruler of c, is very weak in the 6th house, which doesn’t agree with Oprah’s many achievements. An Ascendant in early ¦, with its ruler in the 10th house (success by one’s own efforts) where it is disposed by a strong t, seems more likely. That u is the exaltation
ruler of the z MC confirms this slight change. ¦ is a cold & dry sign, and the q is in the north-east quadrant, a cold & moist place. As the Ascendant has no aspects, we have C+ D-. u is the ascendant ruler. It is oriental (CD), in x (CM), and squared by the q (HD) and oriental r (HM), both in a hot & moist sign. All of these modifications dramatically weaken u's basic cold & dry qualities, giving C- - - D-.

This time the Ascendant ruler and the next-to-rise planet differ. r will be the first planet to rise after the birth. r is oriental and so hot & moist, and she is in b, a hot & moist sign, which heightens both the heat and the moisture of r, making H+ M+ in our notation. r is A q, D u and G w, all of which boils down to H+ M, or thereabouts. The five factors provide:

q            H- M++
w            H+++ D- -
Ascendant    C+ D-
Asc-ruler    C- - - D-
1st to rise  H+ M

This is a well-balanced combination. The dominant temperature is heat and there is considerably more moisture than dryness. A lot of heat and a lot of moisture—Oprah is r personified. Thus, Oprah Winfrey's temperament is predominantly sanguine (HM), but even so there are also important choleric (HD) and melancholic (CD) traits. Perhaps this is best summed up by saying she is not at all phlegmatic. This excellent combination tells us that she has ample energy for whatever she does with her life and that she instinctively knows when to act and when to relax. She is a feeling person, extremely sympathetic, sometimes moody, easily depressed and quick to anger, who enjoys being in the limelight. She is concerned about how others see her, but at the same time she invariably acts from good-hearted feelings. She is changeable and enjoys amusements, wants to be agreeable and to cooperate, and is truly concerned by the problems of others and empathizes with them, just as her carefully scripted public persona suggests. Her reputation is of immense importance to her and she has a tendency to live mainly on the surface of things. She is very ambitious and believes in herself.
As a predominantly sanguine person she is animated, amusing, entertaining, gregarious, refreshing and spontaneous. She can also be a showoff and is somewhat disorganized and superficial. She has chosen to be a talk show hostess and entertainer, for which her temperament is ideally suited, and it comes as no surprise that she is very successful at this.
POPE JOHN PAUL II, born Karol Wojtyla. The late Pope's q is in s where it is A w. q in s is cold & dry in a hot & moist season, which is a perfect balance, noted as C- D-. The presence of the w, essentially a cold & moist body, here in a hot & moist sign (C- M+), brings back some coldness and further diminishes the s dryness. Combine these and we have cold with just a trace of dryness, C D- - -. His Ascendant is A t. z rises, a hot & moist sign modified by the q being in the hot & moist SW quadrant, H+ M+. t occidental is cold & dry, and it is in a hot & moist sign (C- D-). This slightly reduces the Ascendant's heat and moisture, which we can describe as H M.
The w is in hot & moist d, and it is in its hot & moist first quarter, making it extra hot and extra moist, H+ M+. Its closeness to the q in s (H- D+) reduces its moisture, giving H+ M-. The w is applying to aspect oriental u in h (C+ D-), which further reduces both the w’s heat and moisture, making H- M- -. r is both the Ascendant ruler and the first of the classical planets that will come to the Ascendant following Karol Wojtyla’s birth.
r will rise before the q, so she is oriental, making her hot & moist, but this is modified by (1) her presence in s, a cold & dry sign, which reduces both her heat and moisture; (2) her closeness to e, also oriental in a cold & dry sign (H- M-), which replaces some of the lost heat and moisture; (3) her conjunction with the L, considered to be cold & dry because it is oriental; and (4) her square to occidental y, a hot & moist planet in hot & dry g, which further replaces the moisture r lost. After these modifications we can say that the first-to-rise planet contributes both warmth and moisture. We therefore have from the different indicators:

q            C D- -
w            H M-
Ascendant    H M
Asc-ruler    H+ M+
1st to rise  H+ M+

Heat predominates over cold, and there is more moisture than dryness—this is so even if the Ascendant ruler is ignored. The late Pope therefore had a sanguine temperament. However, there was also some melancholia (from the q) in his nature. As a result we can say, quoting Nicholas Culpepper's description of an individual who is mainly sanguine but has a distinct element of melancholy in his make-up, that John Paul II was "much like unto a sanguine man, but not altogether so liberal, merry and bold, for he has as it were a spice of the inclination of the melancholy person."8

From this we know that John Paul II was outgoing and energetic, a sociable individual who acted from good-hearted impulses to reach out to others, his heart being moved with sympathy and benevolence for those in need, and that he knew how to enjoy life; but that, unlike the pure sanguine person, he could be idealistic and well-organized, stubborn and implacable, holding to firm principles of behavior, and was indifferent to changes in fashion. The melancholic trait would stop him from allowing himself to be distracted from what he considered to be his correct course. The mix between the two temperament traits, the sanguine and the melancholic, made him a visionary who was outspoken against the pitiless persecution of ordinary people; it gave him the power to bring the church down from the mountain top and the energy to travel endlessly about the world, but at the same time the melancholic side of his nature
made him declare that Jesus Christ was the only true way to salvation, which was a dismissal of all other religions. He was stubbornly insistent on orthodoxy within the Church, and he adopted a firm conservative stance against euthanasia, the death penalty, contraception (including condoms used to prevent the spread of AIDS), abortion, homosexuality, women ordained in the priesthood, married priests, and any hint of Marxism. Although the same melancholic element in his temperament came out in his firm political stands against communism and the invasion of Iraq by the USA, it was under his leadership that Catholics in the USA turned against the Catholic John Kerry and voted for the born-again George W. Bush.

8 Culpepper's Astrological Judgement, p. 146.
POPE JOHN PAUL'S successor, Pope Benedict XVI, the former Cardinal Joseph Ratzinger, was born with the q, w and Ascendant in three different elements. The q is in a, a hot & dry sign in a hot & moist season, hence very hot but only a little dry (H+ D-). It has only one aspect, the sextile from the occidental t, a cold & dry planet in a hot & moist sign (C- D-). The result is a choleric q, H D.
The waxing w is in hot & moist z, in its second quarter, hence very hot and only a little moist. The w has no aspects, so we have a sanguine w, which we can write as H+ M-. The Ascendant is in cold & moist n, modified by the q being in the north-east quadrant, which is also cold & moist, and therefore very cold & very moist (C+ M+). Its only aspect is the conjunction from oriental
y, a hot & dry planet in a cold & moist sign (H- D-), which doesn't really alter the quality of this phlegmatic Ascendant, hence C M. y is both the next planet to rise and the Ascendant's ruler. As we have seen, it is a hot & dry planet in a cold & moist sign, hence H- D-. There are no aspects. We therefore have from the different indicators:

q            H D
w            H+ M-
Ascendant    C M
Asc-ruler    H- D-
1st to rise  H- D-
Heat & dryness clearly predominate; he is choleric. The ruler of the Ascendant conjunct the ascending degree is a strong indication of conservatism and orthodoxy. Cardinal Ratzinger's role for twenty-five years under Pope John Paul II was to act as Grand Inquisitor for Mother Rome. His task, one that seems perfect for this choleric man, was to correct theological error, to silence dissenting theologians, and to stomp down on heresy wherever it reared its ugly head. He did the job with rigor and, consequently, earned a notorious reputation as "the Enforcer." He has been openly critical of other religions, calling them "deficient," and sees homosexuality as an "intrinsic moral evil." In his new role he is expected to continue hammering home his conservative views. Within a week or so of Joseph Ratzinger's elevation to the papacy, the editor of a Catholic magazine was relieved of his position at the order of the Vatican, apparently because he had published views that opposed the Church's stated position. It would seem that the new Pope continues to be the choleric champion of the faith, ready to stand against modernity, resisting what he has reportedly called "the waves of today's fashions and the latest novelties."

The new pope is ideally suited to his new role. He will tend to be autocratic and somewhat arrogant, decisive, demanding and perhaps domineering, outspoken, self-assured, self-willed and perhaps intolerant, tenacious and stubborn; very much a ruler. As a choleric, especially one born with y at the Ascendant in its own nocturnal sign, he knows instinctively just what it takes to be the perfect pope, to be everything members of the Catholic Church expect someone in his position to be.
HILLARY Rodham Clinton, the wife of the 42nd president of the United States and the current senator for New York, was born with the q in x, a cold & moist sign in a cold & dry season, thus C+ M-. The q has no close aspects. Her waxing w is in the final degree of n, a cold & moist sign in the second (hot & dry) quadrant from the q. As a result the w is only slightly cold and slightly moist, C- M-. This is modified by the trine from occidental y, a hot & moist planet in a hot & dry sign, H+ M-. The combination results in C- - - M: not really cold at all but with some moisture.
At the stated time the Ascendant is in the final minutes of d. Alexander Marr thought the correct time should be some two minutes earlier, which tends to confirm d as the likely rising sign. d rising with the q in the 5th house gives hot & moist modified by cold & dry, H- M- in symbols. The ascending degree is squared by the w, trine to the q, and closely A i. The w is essentially a cold & moist body and here it is in a cold & moist sign, so very cold and very moist, C+ M+. The q is essentially a hot & dry body, which is modified by being in a cold & moist sign, so slightly hot and slightly dry, H- D-. Normally i would be ignored, but here it is very close to the Ascendant and must influence the Senator’s automatic reactions. i is a hot & dry body and here it is in a hot & moist sign, so very hot and slightly dry, H+ D-. All of these different modifications tend to cancel each other out, leaving H M, the original hot & moist. e rules the Ascendant. e is occidental in x, hence cold & dry in a
cold & moist sign, C+ D-. It is D u, A L and A r. u is oriental in g, hence cold & dry in a hot & dry sign, a little cold but very dry, C- D+. The L is occidental in x, hence cold & moist in a cold & moist sign, very cold and very moist, C+ M+. r is similarly cold & moist in a cold & moist sign, so she is also denoted by C+ M+. Combine these all together and we have an Ascendant ruler that is extremely cold but not too dry, C++++ D- - -.

The first planet to rise after birth is t, a hot & dry planet in a hot & dry sign, very hot and very dry, H+ D+. It is D r and closely A “. r, as we've seen, is very cold and very moist, C+ M+. “ is a cold & dry planet, here in a hot & dry sign, so not too cold but very dry, C- D+. Altogether this makes the first planet to rise after birth H D+, hot and very dry. Here we have from the different indicators:

q            C+ M-
w            C- - - M
Ascendant    H M
Asc-ruler    C++++ D- - -
1st to rise  H D+
We find a temperament that contains all four natures: the q and w suggest she is phlegmatic, the Ascendant says she is sanguine, the Ascendant ruler points to melancholia, and t, the body that rises first following the birth, says she is choleric. It seems that she can naturally be all things to all people. One might have expected her to be choleric, the temperament one tends to associate with politicians. Overall there is considerably more cold than heat, and there is slightly more moisture than not. Senator Hillary Rodham Clinton can perhaps best be classified as having a phlegmatic-melancholic temperament. Culpepper describes this combination, one in which phlegmatic is stronger than melancholic, as being "neither very merry nor much sad; not liberal nor covetous; not much bold nor very fearful, etc."

Her phlegmatic side makes her adaptable, balanced, calm and diplomatic; but also indecisive, reluctant, shy and self-righteous. The melancholic trait makes her analytic, idealistic, self-sacrificing, loyal and considerate; also unforgiving, moody, critical and fussy. She has strong principles and is a diligent, hard-working and very practical lady, who is
concerned primarily with herself and with those she considers to be a part of her, her family. She is likely to be cautious, slow and methodical in how she responds to new situations, taking care to explore all possibilities before taking any action.

TRADITIONAL APPROACH

The approach to finding an individual's temperament that has been described here is somewhat different from the method most traditional astrologers employ. For completeness, therefore, an example of the traditional method follows. This is taken from John Partridge's Miscropanastron: An Astrological Vade Mecum, published in London in 1679, pp 84-88.9 There is very little that is different between how this famous astrologer of the past goes about finding the temperament and how the method is described by Lilly, Gadbury and others.

'It is well known what the natural philosophers call temperament or complexion and that is, according to the dogmatists, an ingenerate mixture of the four primary humors, i.e. blood, phlegm, choler and melancholy; but according to the learned alchemist, salt, sulfur and mercury, and of these humors there is an agreeable composition made, in such sort as it may agree to some special kind—and therefore there are diversities of complexions, agreeing both to special kinds, and particular things.

'Hence there is an infinite diversity of humors in a man's body, both good and bad, caused by the constitutions of their parents, and the manifold mixtures of the stars: nevertheless, there are four principle complexions corresponding thereto—first, Sanguine, which is moderately hot and moist; secondly, Phlegmatic, which is cold and moist; thirdly, Choleric, which is hot and dry; lastly, Melancholic, which is cold and dry—and these four complexions are known by the proper qualities and natures of the significators of the temperament, by their equal composition, in collecting by a certain order the testimonies of every one of the qualities, viz. of hot, cold, moist and dry, as shall quickly be more plainly shown. Therefore the significators of the complexion are:

1. The Ascendant and his Lord.
2. The planet or planets placed in the Ascendant, or beholding the same with a partile aspect; among which the l and L are also numbered.
3. The w.
4. The planets beholding the w within orbs.
5. The quarter of the heavens, or the sign the q possesses.
6. The Lord of the Nativity.
'The quality of the significators, and of the signs in which these significators are placed, must be examined according to the doctrine following—in which observe this, that u, t or the L beholding the Ascendant or w in an ill aspect, doth discompose the temperature of the body, although all the rest of the significators are well placed.

Qualities of the Planets

Planet   Oriental        Occidental
u        Cold & moist    Dry
y        Hot & moist     Moist
t        Hot & dry       Dry
r        Hot & moist     Moist
e        Hot             Dry

9 An excellent reproduction of Partridge's 348-page masterwork is now available from Ballantrae Books.
But the qualities of the luminaries are liable to greater alteration, for the w

from the A till the first D    is Hot & moist
from the first D to the S      Hot & dry
from the S to the last D       Cold & dry
from the last D to the A       Cold & moist
The l is of the nature of y and r; the L is of the nature of u and t. The quality of the q is considered firstly according to the quarter of the year, and secondly by the triplicities.

q in a, s, d    Spring    Hot & moist
q in f, g, h    Summer    Hot & dry
q in z, x, c    Autumn    Cold & dry
q in ¦, b, n    Winter    Cold & moist

a, g, c    Fiery: hot, dry & choleric
s, h, ¦    Earthy: cold, dry & melancholic
d, z, b    Airy: hot, moist & sanguine
f, x, n    Watery: cold, moist & phlegmatic
‘Then, having collected all the testimonies, both of the significators and signs, with their denominations of hot, cold, moist and dry, observe which exceeds, and judge accordingly the complexion; for if heat and moisture both exceed the other qualities in number of testimonies, the native will be sanguine; but if moist and cold, phlegmatic; if heat and dryness, choleric; and lastly, if cold and dry, melancholic. ‘Caution to the student: In collecting the testimonies of the four qualities aforementioned, take this advice: when one planet shall be Lord of the Nativity, and of the Ascendant, and placed in the Ascendant, he must be set down thrice in the collection of testimonies; so the w, if she shall be placed in the Ascendant, she must be set down twice; and so for the rest.’
By tradition the various factors are each given an equal weight and tabulated to find the qualities that occur most often. These, when combined, identify the individual’s innate temperament. The analysis of
Pope Benedict XVI's temperament, according to the methods of Partridge, Lilly, Gadbury and others, would then proceed as follows:

Pope Benedict XVI: Temperament Indicators (traditional approach)

Factor                    Indication
Ascendant                 n rises
Ruler of the Ascendant    y oriental; y in n
Aspects to the Ascendant  A y; y in n
w                         w in z; 2nd quarter
w's aspects               none
q                         Spring; q in a
Nativity ruler            y; y in n

Totals of the testimonies: Hot 7, Cold 4, Dry 1, Moist 10
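Partridge's procedure is a straightforward tally: every significator contributes equally weighted testimonies of hot, cold, moist or dry, the winning temperature is paired with the winning humidity, and the pair names the temperament. The following minimal sketch shows only that counting step; assembling the list of testimonies from the chart remains the astrologer's work, and the totals used in the demonstration are simply those in the table above.

```python
# Minimal sketch of the traditional tally: every testimony carries equal weight,
# the dominant temperature is paired with the dominant humidity, and the pair
# names the temperament.

from collections import Counter

TEMPERAMENTS = {
    ("hot", "moist"): "sanguine",
    ("hot", "dry"): "choleric",
    ("cold", "dry"): "melancholic",
    ("cold", "moist"): "phlegmatic",
}

def traditional_temperament(testimonies: Counter) -> str:
    temperature = "hot" if testimonies["hot"] >= testimonies["cold"] else "cold"
    humidity = "moist" if testimonies["moist"] >= testimonies["dry"] else "dry"
    return TEMPERAMENTS[(temperature, humidity)]

if __name__ == "__main__":
    # Totals for Pope Benedict XVI from the table above.
    benedict = Counter(hot=7, cold=4, dry=1, moist=10)
    print(traditional_temperament(benedict))   # -> sanguine
```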
By the above traditional approach, Hot & Moist clearly dominate, giving the new pope a distinctly sanguine temperament. The same approach says that Pope Benedict XVI also has some phlegmatic (Cold & Moist) traits. This is quite different from the strongly choleric temperament indicated by the methodology described in this article.

Knowing how an individual is likely to react in any situation is extremely important. Classifying people into the four main types is an important first step to analyzing who they are. It is essential that astrologers discover an individual's temperament before moving on to interpret his or her horoscope if they hope to be correct in their interpretation. The supreme self-confidence and good fortune associated with y in n rising closely conjunct the Ascendant will be possessed by the new pope whether his temperament is sanguine or choleric. There is, however, an immense difference between a self-confident, lucky man of the people and a self-assured aristocrat who has been blessed with good fortune. The traditional approach for identifying the temperament is rarely used nowadays. It cannot be relied upon; it is too often inaccurate. The approach described in this article aims to correct these problems and in doing so improve the accuracy of horoscope interpretations.
Predictive Astrology: Helpful or Harmful? LEDA BLUMBERG
ASTROLOGY is a universal symbolic language of metaphor and archetypes. The beauty of astrological symbolism is that it enables us to heighten our perceptions about life events and the intrinsic meanings behind them. As a tool of insight, its value is immense. Through astrology, we can gain deeper understanding about ourselves, our motivations, and our life events. But used unwisely, it can cause harm. Purely event-oriented astrology can take away personal responsibility and power. It can cause fear and dread of the future and what the more difficult transits may "bring." Prediction is a powerful tool—but it must be used carefully and wisely.

Dane Rudhyar, a pioneering astrologer who introduced humanistic astrology into the mainstream, stressed the importance of person-centered astrology, rather than event orientation. What is truly significant is the meaning behind events, not the events themselves. In her writings about Dane Rudhyar, Rael wrote, "we make events constructive or destructive according to the meaning we give them…thus instead of encouraging the prediction of events Rudhyar exhorts their constructive interpretation."1 Rudhyar also stressed the importance of understanding cycles and seeing how transits and progressions are part of continuously unfolding cycles. All the planets are in cyclic relationship to each other, and Rudhyar suggested that astrologers study these cyclic phases to gain greater understanding of the larger meanings of aspects, transits, progressions and lunations. The lunation cycle (the phase relationship of the Sun and Moon) can give insight into the phases of personal development. For example, the time of the progressed new moon is indicative of a time of new beginnings, while the progressed full moon signifies a time of culmination. By studying the progressed lunation cycle, one can gain an overview of one's personal life cycles and an understanding of the phases of life experience. Rudhyar wrote, "As I see it, the first and immediate purpose of astrology is not to predict events in terms of statistical probability, but to bring to confused, eager, often distraught persons a message of order, of
1 Leyla Rael, "Dane Rudhyar and the Astrology of the Twentieth Century," in T. Mann (ed.), The Future of Astrology, Biddles Ltd., p. 15.
"form," of the meaning of individual life and individual struggles in the process of self-actualization."2

Many astrologers believe that fate can be altered through awareness and the willingness to make hard decisions, and to grow and change. When people can look at themselves with clarity and see that the choices they make are what create their destiny, not some unseen force or faraway planet, they empower themselves and aren't victims of fate. Knowledge of upcoming transits and progressions can be of great benefit to people because it enables them to see where they are in unfolding cycles and it can help them make informed choices that work positively with the symbolism of the transits and progressions. Nothing just happens to someone. We are all co-creators of our own destinies and knowledge of planetary symbolism can help if it is interpreted wisely. Glenn Perry, founder of The Academy of AstroPsychology, wrote, "I believe that fate can be positively altered through a process of internal healing and integration. The real meaning of events is that they constitute "feedback," information that reflects where the individual is at in terms of health and wholeness. And their real value is that they stimulate growth in precisely those areas where the individual most needs to change." Perry emphasizes, "…what is important about a transit is not the event itself, but its meaning."3

If a client (or an astrologer for that matter) believes that the planets or stars cause events to happen and make them behave in a certain way, then astrology can cause harm by removing personal responsibility. The planets reflect events; they don't cause them. Each individual's behavior, decisions and karma are partially responsible. People have the power to create and change their lives if they are willing to look deep inside and try to understand their motivations and the personal myths that guide their behavior.

Locus of control is a psychological concept that refers to people's perceptions of the connection between what they do and what happens to them. "If people believe that things that happen to them are generally consequences of their actions (i.e., that there is a connection between their efforts and what happens to them), they are internally controlled. If they see no connection between what they do and what happens to them but instead attribute what happens to them to luck, fate, chance, or powerful others, then they are externally controlled."4
2 Dane Rudhyar, The Astrology of Personality, Doubleday, 1970, p. xv.
3 Glenn Perry, "Psychological Versus Predictive Astrology," in An Anthology of the Journal of Astro-psychology, AAP Press, 1997, p. 5.
4 Stephen Nowicki, Dan Adame, Thomas Johnson, and Steven Cole, "Physical Fitness as a Function of Psychological and Situational Factors," in The Journal of Social Psychology, 137, 1997, pp. 549-558.
People with a strong internal locus of control feel they are able to affect or create their own destiny. They feel that what happens to them is at least partially due to their own efforts. People with an external locus of control believe that what happens to them is caused by other people and situations beyond their control (such as planetary transits). They feel that they are victims of circumstances and don't fare nearly as well in times of crisis. Someone who believes that the planets make things happen to them is externally controlled and less likely to take charge of their lives and their destinies. According to Rudhyar, "whenever a person feels that planets are entities that influence him and cause things to happen, good or bad—such a person is psychologically hurt by such a belief. …All such beliefs in outside powers influencing, or (as is usually the case) compelling, the personality in this or that manner constitute deteriorations of individual selfhood; they lead to psychological slavery through fear or through transference of the power of initiative, or colloquially speaking, through "passing the buck" to some entity outside the self."5

Of course, many people embrace both an internal and external sense of control. Astrologers can clearly see a connection between planetary symbolism and actual events. But the planets don't cause events; they simply mirror them. The symbolism within a birth chart unfolds over time, and predictive tools like transits, progressions and solar-arc directions can indicate the timing of certain developments. But there is always a wide range of possible manifestations of any astrological event, and by understanding the symbolism and archetypal imagery one can use this information in a growth-oriented way. Yes, bad things do happen. No one can avoid unpleasantness and conflict. But through meaningful insight, it is possible to grow and evolve in a positive way under even the most difficult circumstances.

There is always a danger of predictions becoming self-fulfilling prophecies. The prediction can become a factor that influences a person's decisions and behaviors, whether consciously or not. Prediction may plant an unwarranted fear in the client's mind that influences them in a negative way, and this fear of the future can paralyze their ability to learn and grow. For example, if an astrologer predicts a difficult time ahead, their client may be overly cautious and may miss an opportunity to rise to a challenge that may lead to success and personal growth. Avoidance of difficult situations isn't necessarily in someone's best interest. Difficult circumstances give us opportunities to learn and grow. Challenges present us with the opportunity to further our personal development and they force us to develop inherent characteristics that may otherwise lie dormant.
Growth, transformation and achievement can result from working through, rather than avoiding, difficulties, and when a difficult transit has passed, one may be grateful for what eventually came out of it. Steven Arroyo wrote, "This is one point about the transits of the trans-Saturnian planets that cannot be overemphasized: the long-term ramifications of these crucial change periods will not become apparent until we have the clarified perspective which only time will bring. The changes that happen during these periods are so intense and so concentrated, while at the same time their full implications on the total life are so subtle, that it is simply impossible for most individuals to assimilate within a short period of time the complete meaning of this transition from one phase of life to another."6

Predictions of realms of experience, the types of issues one may have to deal with, the "flavor" of a particular time period, and where one is in a cycle can be extremely helpful; prediction of actual events is not. "If one expects a specific type of change, transition period or crisis of growth, he or she can prepare to meet it consciously and with open eyes, and may gain more from it in terms of personal maturity and spiritual unfoldment," wrote Alexander Ruperti in his classic book, The Cycles of Becoming.7

5 Rudhyar, op. cit., p. 474.
6 Stephen Arroyo, Astrology, Karma & Transformation, CRCS Publications, 1978, p. 55.
7 Alexander Ruperti, Cycles of Becoming, CRCS Publications, 1978, p. 9.

My friend, Mary, is a good example of making positive use of an upcoming transit. I noticed that transiting i was due to conjoin her natal e. We discussed possible implications and as a result she has kept an open mind to any ideas that interest her, regardless of how unusual or offbeat they may be. Without previous knowledge of this transit she may have ignored some of her sudden inspiring ideas; instead she grabs onto them and sees which ones may be useful. Under the influence of this transit, she pioneered a new publication that has become quite successful and has been creating new concepts that her clients applaud. At the same time, transiting y was conjoining her Midheaven and she prepared herself for long hours in her home office and took on many new projects. At times she felt overwhelmed by the abundance of work that this transit signified, but since she knew that this transit was part of a cycle, she made the most of this busy time and put it to productive use.

Several years ago I was contemplating a skiing trip after spending thirteen months in physical therapy rehabilitating a knee that I badly injured while transiting u was in my 6th house opposing my natal q. I looked at the ephemeris and saw that transiting t was due to conjoin my natal i during the few days that I planned to ski. Feeling quite cautious due to my recent surgery and rehabilitation, I considered postponing the trip. Several astrology books warn about the potential for an accident
under the influence of a t A i. Not what I wanted to hear. However, I decided to look at this transit in a different way and realized that it could just as easily symbolize exciting physical activity. I decided to take the risk, went on this ski trip, and ended up having a fabulous, invigorating time. I broke through the fears I was harboring and was able to experience this transit in a positive way.

Many people see astrologers because they want predictions. Oftentimes, they care more about what will (or may) happen than what a period of time means for their personal growth. They are more concerned with outcomes than process. So how does a humanistic astrologer work with these clients? "Certainly there is a place for prediction in astrology, but I believe it should be a psychologically enlightened prediction that focuses on the meaning of a transit as an opportunity for learning rather than an occasion for evasive action," wrote Perry.8 An astrologer must ask themselves: How can I use my knowledge of astrology to empower my client, not take power away? How can this information be helpful? "It should be the task of the wise astrologer to help his clients and his students see that free will does not always exist in controlling outside events, many of which cannot be avoided whether or not one believes in karma; but that in his reaction to them lies his choice, his chance for individual growth and development," wrote Richard Idemon.9

Several years ago, a friend came by so I could look at her chart. She was having marital difficulties and needed some additional insight. She was experiencing several major transits that pointed to the likelihood of a separation; for example, transiting i was in her 7th house opposed to her natal r. This wasn't something I wanted to predict, yet I felt some responsibility to bring it into her awareness. It became clear to me from talking with her and looking at her chart that it was quite likely that her husband was having an affair – though I had no actual proof. This was a real dilemma for me. I wanted to help my friend, but I didn't want to make a painful prediction, and in fact, there was no way I could be sure that it would manifest as I saw it. We discussed that this would be a period of change within her relationship and we talked about her marital issues and her relationship needs. I strongly encouraged her to seek marital counseling with her husband to work on understanding how this relationship had evolved to where it was and to explore what changes they could make to save the marriage. Would the marriage be saved? I couldn't say for sure. My friend did seek therapy and it soon surfaced that her husband did indeed
have a lover and he soon filed for divorce. Was I helpful? I think that by encouraging my friend to delve deeper into her relationship needs she became aware that, although quite painful, her current relationship problems were going to initiate an important period of independence and growth for her. She was able to move on with her life and eventually met another man with whom she is now deeply in love. According to Ruperti: "…no one, and especially not the humanistic astrologer, has the right to decide beforehand whether the results of a crisis will be positive or negative. The crisis may be internal or external or both."10

8 Perry, op. cit., p. 3.
9 Richard Idemon, "Casting Out the Demons from Astrology," in Astrology Now, 1972, p. 26.
10 Ruperti, op. cit., p. 226.

Weather forecasters look at temperature, wind direction and barometric pressure to make their predictions. They gather this factual data and interpret it based on their knowledge and past experiences. We can use this information to plan our activities or we can choose to ignore it. However, if you are planning an outdoor event, it would be unwise to ignore the weather forecast. If you're going skiing, you probably don't want to spend the day outside in freezing rain, and if you're planning a family barbecue, wouldn't it be nice to pick a sunny, warm day? Of course, weather prediction doesn't have the same long-range capabilities as astrological prediction; astrologers are limited only by how far into the future their ephemeris goes.

Knowing predictive techniques can enhance understanding during difficult times. Several years ago I was plagued with a mysterious respiratory illness. I went to several different doctors and no one could explain why I was having coughing spells so bad that at times I was unable to get enough oxygen. I looked in my ephemeris and saw that transiting o was opposing my natal e. This, of course, didn't diagnose or cure my illness, but it enabled me to gain a new perspective on it. I felt that the mystery would clear up once this transit passed, and I took a deep look at what this time period meant for me psychologically and emotionally. Understanding the transit's symbolism enabled me to feel hopeful that this awful period would end soon and it gave me insight into how I might make use of this bad situation. At this transit's third station, I received a letter from the summer camp my two daughters had attended the previous summer with the subject heading "mysterious cough!" They had just discovered that several campers who were sick over the summer actually had pertussis (whooping cough), a highly contagious illness. None of my doctors had even considered this, but thanks to this letter I finally had a diagnosis and was able to recover my good health.

I have found that in many instances the significance of a major transit or progression isn't clear until it has passed. I may have a preconceived idea of what a transit or progression will mean, but oftentimes the real
meaning and the event(s) associated with it were unpredicted. I knew when transiting “ was about to conjoin my w that I'd be dealing with the surfacing of important emotional issues. I expected to be dealing with death (either literally or figuratively), and maybe a change in my physical body or emotional balance. I knew that I was entering an important period of time for my personal growth and decided to be as open as possible to whatever experiences would arise. I tried to prepare myself emotionally for potential loss – not to fear it, but to be ready to experience it. Indeed, this period did signify some death, loss and the need to "let go" so I could move forward. It wasn't an easy period of time, but due to my knowledge of astrological symbolism I didn't resist the emotional growth that was forced on me. Several very important beings in my life died during this transit, but through the pain I expanded my philosophy of life and grew emotionally and spiritually. Knowing astrological symbolism was a great comfort during this time and I am thankful that this understanding gave me insight into the broader meaning of this difficult period.11

Astrological insight can be invaluable for enhancing self-understanding and as a guide to help people find fulfillment by actualizing their potential. Understanding astrology's symbolism helps to bring some order to an unpredictable existence, enabling us to consciously co-create our destinies. When we learn to use it as an aid for understanding, rather than as a system that stereotypes and dictates destiny, we take control of our lives and allow our own free will to determine who we become and how we get there.
11 Ruperti, op. cit., p. 16.
The Musical Correlations of a Natal Chart GERALD JAY MARKOE
EVEN THOUGH the zodiac has historically been partitioned into any number of segments, twelve has evolved as the basic principle used in Western astrology. Even though the musical octave can be and has been divided into any number of intervals, twelve has also evolved as the basic division in Western music: do, reb, re, meb, mi, fa, fa#, sol, lab, la, tib, ti. Twelve astrological signs and houses each have their own dynamic quality; twelve musical notes. Each astrological sign has its polar opposite. If the sign is masculine, its polar opposite is feminine and vice versa. The masculine signs are active and the feminine signs are receptive. Musical intervals are the distances measured in semitones from one note to another. Each musical interval also has its polar opposite or inversion as shown in the table below.
Musical Intervals
Note   Solfège   Up from C1             Down from C2
C2     do2       8ve (12 semitones)     unison (0 semitones)
B      ti        M7 (11 semitones)      m2 (1 semitone)
A#/Bb  tib       m7 (10 semitones)      M2 (2 semitones)
A      la        M6 (9 semitones)       m3 (3 semitones)
G#/Ab  lab       m6 (8 semitones)       M3 (4 semitones)
G      sol       P5 (7 semitones)       P4 (5 semitones)
F#/Gb  fa#       T (6 semitones)        T (6 semitones)
F      fa        P4 (5 semitones)       P5 (7 semitones)
E      mi        M3 (4 semitones)       m6 (8 semitones)
D#/Eb  mib       m3 (3 semitones)       M6 (9 semitones)
D      re        M2 (2 semitones)       m7 (10 semitones)
C#/Db  reb       m2 (1 semitone)        M7 (11 semitones)
C1     do1       unison (0 semitones)   8ve (12 semitones)
m = minor, M = Major, b = flat, # = sharp, P = perfect
For example, from C2 down to B is a minor second (m2) and from C1 up to B is a major seventh (M7). If the musical interval is major, its polar opposite is minor, and each resolves in a different direction. ("Resolve" means that each note has a certain dynamic quality which pulls it either up or down; when it gets to where it is being pulled, it is considered resolved.) In musical terminology, fifths and fourths are considered perfect, rather than major or minor. However, the fifth is larger than the fourth and resolves down, and the fourth is smaller than the fifth and resolves up.
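The inversion arithmetic is simple enough to check mechanically. The short sketch below is purely illustrative (the function and the name table are mine, not the author's): an interval and its polar opposite always add up to the twelve semitones of the octave.

let intervalNames = [0: "unison", 1: "m2", 2: "M2", 3: "m3", 4: "M3", 5: "P4",
                     6: "T", 7: "P5", 8: "m6", 9: "M6", 10: "m7", 11: "M7", 12: "8ve"]

// The inversion of an interval is the octave (12 semitones) minus the interval.
func inversion(ofSemitones semitones: Int) -> Int {
    return 12 - semitones
}

// B is 11 semitones up from C1 (a M7); C2 down to B is 12 - 11 = 1 semitone (a m2).
let up = 11
let down = inversion(ofSemitones: up)
print("\(intervalNames[up]!) inverts to \(intervalNames[down]!)")   // prints "M7 inverts to m2"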
Polar opposite musical intervals: each interval pairs with its inversion, and the two members of a pair resolve in opposite directions. The second pairs with the seventh (m2 with M7, M2 with m7), the third with the sixth (m3 with M6, M3 with m6), the fourth with the fifth (P4 with P5), and the octave with do.
Actually they do not always resolve in this direction in musical compositions. When they do not resolve as they "are supposed to" an unexpected element of surprise is introduced to the music.
TEMPO & THE RELATIVE SPEED OF NOTES
Planets have different speeds. e of course is the fastest, and each gets progressively slower through r, t, y, u, i, o, and “, the slowest. Music also has various tempi, ranging from presto—very fast—through allegro, allegretto, andantino, andante, adagio, lento, and largo, the slowest. The planets travel at different speeds. Music moves at different speeds.
e: presto, very rapid
r: allegro, rapid
t: allegretto, march time
y: andantino, briskly
u: andante, flowing
i: adagio, slowish
o: lento, slowly
“: largo, very slow
In astrology, a planet may be rising, setting or stationary. In music, the movement of tones may rise, fall or stay in the same place. The tempo may be accelerating, slowing down or staying the same. In music, the term "counterpoint" refers to two or more melodies playing simultaneously. The three types of movement that counterpoint may take are parallel motion, in which the two voices move together in the same direction; contrary motion, in which the two voices move together in opposite directions; and oblique motion, in which one voice remains stationary and the other moves either up or down. In counterpoint notes of different speeds can be playing simultaneously.
1st species counterpoint: note against note (each note having the same speed)
2nd species counterpoint: two notes against one (one part twice as fast)
3rd species counterpoint: four notes against one (one part four times faster)
These three species of counterpoint are the basis of 16th century music (there are actually two more species that do not involve other speeds, which are not included here). In contemporary music, note values go up from whole notes, which equal four beats, to 64th notes, which equal 1/16th of a beat. This concept of counterpoint is analogous to the movements of the planets. Two planets may move together in the same direction, move in opposite directions, or one may be relatively stationary or actually stationary as another planet moves toward or away from it. In astrology, four kinds of signs range from the heaviest or densest sign of earth, through liquid which is water, then air and finally fire, which is the finest or lightest, or thinnest. In music, voices and instruments in groups of four range from the lowest to the highest.
VOICES: Soprano, Alto, Tenor, Bass
STRINGS: Violin, Viola, Cello, Bass
WOODWINDS: Flute, Oboe, Clarinet, Bassoon
BRASS: Trumpet, French Horn, Trombone, Tuba
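The relative speeds can also be expressed numerically. The sketch below is only an illustration (the helper name is mine): counting a whole note as four beats, a 1/n note lasts 4/n beats, so the species of counterpoint simply pair voices whose note values stand in ratios such as 1:2 or 1:4.

// A 1/n note lasts 4/n beats: a whole note is 4 beats, a 64th note is 4/64 = 1/16 beat.
func beats(forNoteValue n: Int) -> Double {
    return 4.0 / Double(n)
}

let whole = beats(forNoteValue: 1)         // 4.0 beats
let half = beats(forNoteValue: 2)          // 2.0 beats
let sixtyFourth = beats(forNoteValue: 64)  // 0.0625 beats, i.e. 1/16 of a beat
print(whole / half)                        // 2.0: second species, one voice twice as fast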
There are many more correlations between astrology and music which will be covered in detail in a forthcoming book.
THE WILL OF THE TONES
Today's musical tones have evolved since the beginning of mankind, in primitive and civilized cultures all over the world. Primitive peoples instinctively produced the same basic tones that scientists and philosophers discovered intellectually. This is not surprising since the musical tones used today and for centuries all over the world are based on harmonics, the mathematical divisions of a vibrating string. The string is divided into halves, thirds, fourths, fifths, sixths, sevenths, eighths, and
so on. The lower the number dividing the string the more universally used is the note derived from that division. For example, the string divided into two parts gives us an octave. Since women's voices are naturally an octave higher than men's voices, singing in octaves occurs quite naturally in any kind of group singing, whether primitive chanting or contemporary folk song. The string divided into three parts gives us the note sol or the perfect fifth. This musical interval of the perfect fifth is the most universally used interval. It is found in all types of music including Classical, Jazz, Chinese, Turkish, Indian, etc. The string divided into four parts gives us the note fa (¾ths of the string). This is the next most universally used tone. As the divisor increases, it is less and less commonly used. The mathematical ratios for our Western system of twelve tones are: do2 ti tib la lab sol fa# fa mi mib re reb do1
1:1 16:17 8:9 5:6 4:5 3:4 5:7 2:3 5:8 3:5 4:7 8:15 1:2
The lower the ratio, the less dissonant (tense, active, needing to move) the note. The higher the ratio, the more dissonant the note. When we hear music we are hearing mathematical ratios which correspond to many other naturally occurring phenomena. Of all these tones, do is the fundamental tone or center of gravity from which the other tones are derived and also the place to which they return. In any piece of music do is the final note and the center of gravity around which the other tones resolve. Each of these other tones has a definite will of its own and its own special place in the musical hierarchy. To understand the musical concept of "center of gravity," we can use the analogy of a giant clock having one hand that weighs fifty pounds. The hand is not connected to any mechanism inside and can be moved freely. It naturally falls to the 6 o'clock position because of gravity. As the hand is moved in a complete circle, different dynamics of weight are experienced at different places. To move the hand from 6 o'clock to 7 o'clock will be considerably easier than moving the hand from 8 o'clock to 9 o'clock. Once the hand is in the 12 o'clock position
it is again "weightless" and can be balanced there. Past this position it will move by itself gaining in momentum until it reaches bottom. Each point in this 360° circle will have had a different dynamic quality. In the same way each musical tone has its own dynamic quality in relation to do. For instance, the reason that the audience at a concert knows the symphony is over is not related solely to the fact that the musicians have stopped playing. Rather, the music has reached a conclusion that sounds like a conclusion to the listeners. That conclusion derives directly from this “will of the tones.” Tones themselves are representations in sound of cosmic laws which manifest themselves in everything, including music, color, geometry, physics, and the human body. Thus can music be what it is to us— satisfying, unsatisfying, predictable, unpredictable, harmonious, dissonant, beautiful or ugly. Of course the art of music also depends on the state of the listener and their level of attention, conditioning, and familiarity with the work itself or with similar musical compositions.
HARMONICS
Is there an objective connection between music and the heavenly bodies? Where is the hidden key to the philosopher's dream of the Celestial Music, the music of the spheres? The answer may lie in one word that is used in both the science of music and the science of astrology: harmonics. Ask any string player to explain a harmonic, and they will tell you that it is a note produced as if by magic from a certain place on the instrument touched very lightly. (Usually one must press down tightly to get a note to sound clearly.) This celestial sounding little note audibly occurs at the basic divisions of the instrument's string. The more simple the number, the louder and clearer the harmonic. The harmonics occur in order of loudness and simplicity of number: ½, ⅓ and ⅔; ¼ and ¾; 1/5, 2/5, 3/5, and 4/5; 1/6 and 5/6 (3/6 equals ½ and 2/6 equals ⅓, etc.). At each of these points there is both a harmonic that results from a light touch, and a stopped note produced by pressing firmly. At the very basic numbers, these notes are one and the same. Beyond the simple fractions, the harmonic and stopped note produce different notes but have a definite relationship. Once past the basic numbers, the mathematics gets very complex. This stopped note and partner note of each harmonic both equal the whole. For example, a planetary angle of 60º is also 300º, and an angle of 40º is also 320º. In the same way a note played on the guitar, for example 1/6th of the way down the string, has a partner note of 5/6. It also has relative notes (as in family relatives) of ½, 2/6 (⅓), and 4/6 (⅔). The entire family of sixths is shown on the next page. The harmonic sol3 is produced by touching lightly at any of the divisions except at the point 3/6 (½), where it is drowned out by the louder harmonic do2 existing there, and at the points 2/6 (⅓) and 4/6 (⅔) where it is drowned out by sol2. This is because the basic numbers are the loudest, and decrease in volume by approximately 30% to 50% at each higher harmonic. For example, if do1 were given a volume rating of 100, do2, which is based on the number 2, would have a rating of 70; sol1, based on the number 3, would have a volume rating of 40; do3, based on the number 4, would have a volume rating of 20, and so on.
By pressing tightly at these points the following notes occur. At 1/6 we get sol3, which is identical to the harmonic since it is the family ruler. (All family rulers have stopped notes identical to their harmonics.) At 2/6 (1/3) sol2 is produced. Note that 2/6 is twice the length of 1/6, thereby sounding the same note an octave lower. In other words in this family of 6 the law of 2 is also involved. As we go higher in number the interrelationships increase rapidly. At 3/6 (½) we have do2, at 4/6 (2/3) we have sol1 and at 5/6 we have mib1. To sum up, each division of a musical string has a partner note, a harmonic, a stopped note, and a family of related notes. For example a division of 1/6 has a partner of 5/6; together they equal the whole. The harmonic at 1/6 is sol3, the stopped note is sol3, and the family relatives are do1 (all), 1/6 sol3, 2/6 sol2, 3/6 do2, 4/6 sol1, and 5/6 mib1.
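As a worked illustration of the division arithmetic described above (the function names are mine, not the author's), a touch point k/n of the string has a partner (n - k)/n, just as a planetary angle pairs with its complement to 360°, and touching lightly at 1/n sounds the nth harmonic, n times the frequency of the open string.

// A division point k/n pairs with (n - k)/n; together they make the whole string.
func partnerFraction(numerator k: Int, denominator n: Int) -> (Int, Int) {
    return (n - k, n)
}

// The same arithmetic for angles: 60° pairs with 300°, 40° with 320°.
func partnerAngle(_ degrees: Double) -> Double {
    return 360.0 - degrees
}

// Touching the string lightly at 1/n of its length sounds the nth harmonic,
// whose frequency is n times that of the open string.
func harmonicFrequency(fundamental: Double, division n: Double) -> Double {
    return fundamental * n
}

print(partnerFraction(numerator: 1, denominator: 6))       // (5, 6)
print(partnerAngle(60))                                     // 300.0
print(harmonicFrequency(fundamental: 110.0, division: 6))   // 660.0 Hz for a hypothetical 110 Hz string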
COSMIC LAWS
Cosmic laws, the basic laws of the universe, are most simply represented by numbers. Every cosmic law can be expressed as a number or ratio which is the ruler of many other phenomena. For example, the number 3 has a corresponding geometric shape: the triangle and pyramid. It has a corresponding musical note which is the vibrating string divided into three parts, giving the note sol. It also has a correspondence with the white ray and the color spectrum, giving us three primary colors from which all other colors can be derived. It represents the division of the zodiac into cardinal, fixed and mutable signs. It has a correspondence in the human body, which can be divided into the lower, middle and upper parts; legs and pelvis, arms and torso, and head, each having a different function: mobility, dexterity and thought. The list of tripartite divisions is endless and is governed by the principle of "As above so below." The laws that govern the celestial bodies also govern life on earth and these laws may be observed in the most
mundane objects and events. Both music and the movements of the heavenly bodies exist within contexts circumscribed by the factors of time and space. One note is a certain distance from another note just as one planet is a certain distance from another. The note moves at a certain speed in a certain direction in a way comparable to the planet's movement at a certain speed in a certain direction. In both cases there are many interrelationships. Converting planetary positions into music is a process of finding the correlations between points on a vibrating string and points in a 360º circle. The conversion is also linked to correlations of music with geometrical shapes, snowflake patterns and other phenomena. The basis of our formula is astrological and musical harmonics. (Other types of harmonics, such as geometrical harmonics and color harmonics also link seemingly unrelated phenomena with each other.) Illustrations of the cosmic laws One through Seven follow, in diagrams and lists of concepts. These laws can be used as visualizations or meditations, and can be further enhanced by musical experiences. For each cosmic law there is a musical note or notes, which, if played on the piano or another instrument while meditating on the law, can deepen the awareness and experience of the relationship between sound and the specific law. For do any note can be used, for example a low C on the piano. If do2 is indicated, it means to play the same note one octave higher. Do3 means the same note still another octave higher and so on. When do and any other note is mentioned, playing do and that note simultaneously gives the musical interval (the distance between the two tones). Do is always the 0°-360° point—the center of gravity and the place from which all notes come and the place to where they return. The cosmic law of one (unity and wholeness) is embodied in both the whole circle of the zodiac and the whole string. Each whole contains within itself all parts. It is everything, in the sense that it contains within itself all divisions (½, 1/3, ¼, etc.); and, it is nothing, in the sense that it is not any one of these divisions, it is all of them simultaneously and the boundaries or "space" in which they occur. The undivided circle is the zodiac, which contains within itself the everything of all possibilities of the zodiac. The undivided musical note is do which contains within itself all musical possibilities.
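One way to make the sign-to-note correspondence concrete is sketched below. This mapping is my own illustration rather than a formula published by the author: it simply divides the 360° circle into twelve 30° segments and pairs each segment with one of the twelve semitones above do.

// Twelve 30° segments of the zodiac paired with the twelve semitones above do.
let chromatic = ["do", "reb", "re", "mib", "mi", "fa",
                 "fa#", "sol", "lab", "la", "tib", "ti"]

func note(forLongitude degrees: Double) -> String {
    let normalized = degrees.truncatingRemainder(dividingBy: 360)
    let index = Int(normalized / 30.0) % 12
    return chromatic[index]
}

print(note(forLongitude: 0))      // "do": the 0° starting point
print(note(forLongitude: 120))    // "mi": 120° away, a trine from the start
print(note(forLongitude: 300))    // "tib"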
THE LAW OF ONE
A being—either a person, a planet, an animal, an atom, a flower, or any entity. It contains within itself many parts. It is greater than the sum of its parts.
The white ray—it contains within itself all other colors.
God—containing within itself everything.
The musical note do—it contains within itself all other notes. It is everything in that it contains within it all musical possibilities.
0° and 360°—conjunction, A—the beginning and the end, Alpha and Omega.

THE LAW OF TWO
Night & day
Light & dark
Positive & negative
Electrical current having a positive & a ground
Mirror reflection
Balance
Yin & Yang
Inner world, outer world
Two halves of the human body (right side & left side)
Polar opposites
Paired organs (two eyes, two ears, etc.)
The musical note do2 (one octave higher than do)
S, 180° opposition

THE COSMIC LAW OF THREE—TRINITY
Three Primary Colors
Tripod
Triangle
Pyramid
Three parts of the human body (legs and pelvis, torso & arms & head)
Each body part divided into three (upper arm, forearm, hand; three sections to each finger)
Three functions—thinking, moving, and feeling
The musical notes sol1 (at 2/3) and sol2 (at 1/3)
A musical triad
Cardinal, Fixed, Mutable
Trine, F, 120°
THE COSMIC LAW OF FOUR—QUADRUPLICITY
Soprano, Alto, Tenor, Bass
Four seasons (2 solstice & 2 equinox points)
Fire, Earth, Air, & Water
Four Cardinal, Four Fixed, Four Mutable signs
The musical notes fa1 (at 3/4), do2 (at 1/2) and do3 (at 1/4)
90°, D
Structural strength: buildings, rooms, etc.
Four legged animals
Two arms and two legs
A musical tetrachord
THE COSMIC LAW OF FIVE
Five pointed star
Five petaled flowers and leaf groupings
A human (head, two arms, and two legs)
Five fingers on each hand & five toes on each foot
Five apertures in the face (two eyes, two nostrils, one mouth)
The Pentatonic scale (Chinese and Japanese)
The musical notes mi1 (at 4/5), la1 (at 3/5), mi2 (at 2/5), mi3 (at 1/5)
Astrological aspect of a Quintile, 72°
THE COSMIC LAW OF SIX
This is a doubling of the Law of 3 so is also influenced by the number 2.
Two arms with three sections each (upper arm, forearm, and hand)
Two legs with three sections each (thigh, calf, and foot)
Six-pointed flowers and leaf formations
The musical notes mib1 (at 5/6), sol1 (at 4/6), do2 (at 3/6), sol2 (at 2/6), and sol3 (at 1/6)
Astrological aspect of a sextile, 60°, G
THE COSMIC LAW OF 7
Seven Colors
Seven Planets (in ancient times)
Two arms, two legs, head, pelvis, and torso
Seven-pointed flowers and leaf formations
The seven tone musical scale (do, re, mi, fa, sol, la, ti)
The musical notes mib1 (at 6/7), fab1 (at 5/7), tib1 (at 4/7), mib2 (at 3/7), tib2 (at 2/7), and tib3 (at 1/7)
Astrological aspect of a septile, approximately 51.43°
The Eighth House JOHN FRAWLEY
ONCE UPON A TIME, when someone had their horoscope cast the first thing the astrologer would determine would be the length of the person's life. This was considered an obvious preliminary, marking out the limits of the investigation. For there was no point whatever in the astrologer laboring to predict what would happen to the person on Wednesday if the chart showed that he was more than likely to die on Tuesday. Things have changed, for predicting the length of life is now regarded as anathema. Many text-books of astrology state in no uncertain terms that it is one thing that no astrologer should ever do. Yet the awareness of its necessity still remains, for the astrologer will often be asked, “OK: you've said such and such will happen; but what if I get hit by a bus tomorrow?” We may leave to one side the question of whether being hit by a bus is really as common a cause of fatality as such questioners seem to believe; but it is only reasonable to think that—making full allowance for the astrologer's human fallibility—if the client were to come to a sticky end in the immediate future, it could and should be read in the chart. Provided the astrologer is willing and able to do it. We do hear appalling stories of irresponsible stargazers carelessly predicting ghastly ends with no apparent thought for the consequence of their words. $ and “ seem to be the favored pegs on which they hang their dire prognostications, although neither of these has much connection with death in the chart. Obviously, we must be cautious about what is said and to whom we say it. In more spiritual ages our ancestors knew that life makes sense only if lived in the full awareness of death: memento mori. To the modern eye, this appears gruesome. Far from it, as it heightens the capacity to live. Today, death is something that we much prefer to forget about, except when it happens to the bad guys on TV. Forget it as we may, it will still catch up with all of us, and it is an event of some consequence in the life. It is often claimed that advances in medical knowledge (a somewhat loaded phrase, this!) and extended life spans mean that death cannot be predicted by astrology. This is both demonstrably untrue and theoretically unsound. If such were the case, there must come a point in the life at which we are no longer subject to the stars. We might then all look forward to an old age where we are married to Mel Gibson or Nicole Kidman and win the lottery every week. Disappointing though it might be, this is unlikely to happen.
The debate about the prediction of death has created some bizarre ideas about the nature of the 8th house of the chart. As death has apparently been abolished, it has become necessary to find something else for the 8th to do. Favorite options today are `transformative experiences', whatever they might be, and sex. The idea of sex as an 8th-house activity is quite horrific. In any astrology that purports to say anything of concrete and verifiable accuracy, the 8th is the house of death. This is not death in any poetic or metaphorical sense, as some modern authorities claim. This is death in the very real sense of someone no longer being alive. There are no prizes at all for predicting that somebody will eventually die: the important part of this prediction is getting the timing right. This is a matter of some importance, as if my astrologer has convinced me that I will die tonight, so I spend my last penny on an afternoon of hedonistic glee, I may not be best pleased when I wake tomorrow to find that the prediction was wrong. Much nonsense is written about the timing of death from the chart. When Princess Diana died, for example, several published articles pinned the blame on progressed “ being in her 8th house. As at any one time a twelfth of the population has progressed “ in their 8th house, and they do not all drop dead, this has limited validity as a predictive technique. We must indeed look at progressions, but we need tools far more precise than the infinitesimally slow meandering of progressed Pluto through the zodiac. Contacts with the 8th cusp and the ruler of the 8th house can be important, but even here we must exercise caution. Especially today, when we can flick up a progressed chart at the touch of a mouse, it is easy to forget that the progressions at any one moment are part of a system of on-going cycles, not a separate entity in themselves. So if we see the progressed w crossing our natal 8th cusp we should not panic, but remember that the progressed w circuits the chart every twenty-eight years, so in the average life-time it will do this two or even three times, usually without any ill effect. The fixed stars assume a great significance whenever we consider the major turning points of the life, so progressions onto the more malign of them need to be considered. Malign, that is, from our own perspective, as the significance of the w's nodes makes clear. At the end of his Republic, Plato gives a beautiful piece of astrological symbolism as he tackles the most fundamental issues of life and fate. He sees our life as a wheel revolving around a spindle. This is different to our common perception, which is of a straight line starting when we were born and moving inexorably to our death. The spindle around which our life is strung is the axis of the lunar nodes. From our perception it is `l good; L bad', as the l is the doorway into life, through which we come ‘trailing clouds of glory’. The restrictive L,
traditionally likened in its effects to u, is the strait gate and narrow way through which we pass out of life. But, as the great teachers have ever told us, our perception is upside down, conditioned as it is by our viewpoint within life. It is significant in this context that we speak of the ‘pearly gates’ of heaven, as the lunar symbolism of the pearl brings us back to the w and her nodes, reminding us exactly of what we are talking. From the progressions and the Solar and Lunar return charts the 8th house will show us the timing of death; it will also show us its quality: whether it will be sudden or long drawn; from illness or accident, or whatever. As an extension from the idea of death, the 8th house also shows legacies. The major significators of wealth in the 8th house, William Lilly tells us, show ‘profit from dead folks’—and he was well placed to know, with his happy knack of marrying rich widows shortly before they died! The 8th is not money only from the dead, however, for as the 2nd house from the 7th it shows the money of whomever the 7th house represents. If the 7th house is my wife, the 8th will show her money. This was a major topic of interest in Lilly's day, where much of the astrologer's practice was devoted to questions of “How much money does my prospective spouse have—and how easily can I get my hands on it?” This is not a matter that is entirely lost in the wastes of time. While wives now may not commonly bring dowries with them, we do often see in horary questions that there is a puzzling lack of reception between man and woman. “What does he/she see in her/him?” we wonder. Until we notice that there is a strong reception between our querent and the ruler of the 8th house. “Aha!” we think. `He may not like her much, but he does like her money'. We can then consider the ruler of her 8th to see if she really has any money, or whether his interest is just wishful thinking. As we saw with the 2nd house, there is a deeper side to this. While the 2nd is on a superficial level my possessions, and at this deeper level my self-esteem, so the 8th is the other person's possessions, and also his esteem for me. So often in horaries the 8th-house concern is less a desire for the other person's cash than the emotional necessity of their thinking well of me. It is not only partners, but also opponents, and even ‘any old person’ that is shown by the 7th. So the 8th house is also my enemy's money. This is pertinent in horary questions of profit. “Will I win by backing Red Rum in the 3.30?” What I want to see here is a nice aspect bringing the ruler of the 8th house—the bookie's money—to the ruler of either the 1st house (me) or the 2nd house (my pocket). I hope to see the ruler of the 8th house strongly dignified and well placed in the chart: I want the bookie's money in the best possible condition. That is, I want a lot of it. Its condition and the nature of the aspect will tell me how much I am likely to win. If there is a good aspect, but the ruler of the 8th is in poor condition, I may win, but I will not win much. This can help us make
decisions. For instance, we may have the choice of a safe investment with a low return, or a higher return at greater risk. If the chart shows a big win, we may decide to take the risk; if a win but only a small one, we would take the safer option. In a business context the other people are our customers, so ‘the other people's money’ is our takings. Again, we want to see the ruler of the 8th house in good condition—but there is an important rider here. If the ruler of the 8th house is in the 8th it will usually be very strong, as it will usually be in its own sign. But the ruler of the 8th in the 8th is a sure indication that, no matter how much money our customers might have, it is staying right in their pockets. This is all the more true if the planet is in a fixed sign. On a more general level, the 8th is ‘anyone else's money’. So, for instance, in vocational matters we commonly find the key significators in the 8th house when the person is an accountant or in a similar profession whose dealings are with ‘anyone else's money’. e in the 8th house might almost be regarded as an astrological signature for accountancy, if e in that chart has significance for the profession. Through its associations with death, the 8th can also show ‘fear and anguish of mind’. By this is not meant a specific fear, such as a phobia; but if in a horary chart the querent's significator is in the 8th house we would judge that he is seriously worried about the situation. As a general rule, the 8th is not a favorable place for a planet to be. Although it is a succedent house and succedent houses are stronger than cadent, it is, as it were, an honorary cadent house. Planets in the 6th, 8th and 12th houses are significantly weakened. In medical matters the 8th shows the organs of excretion. It is the opposite to the 2nd house, which governs the throat, so the 2nd shows what goes into the body, while the 8th shows what comes out. If there is a fixed sign on this cusp in a medical chart, and if the cusp is afflicted by either a planet in a fixed sign or by u, which governs the body's retentive faculty, we might expect constipation, on either a physical or a psychic level. In a mutable sign and afflicted by a badly placed y, we might expect—again on either a physical or psychic level, as shown by other indicators— diarrhea. The chart echoes the connection between money and eating and excreting that Dante shows in his Inferno. The planet of the 8th house is u. Following the Chaldean order of the planets around the chart from u in the 1st, we come again to u in the 8th. u is the ruler of boundaries and of doors. As it shows us the doorway into life, it shows us also the doorway out. As it showed us the strait way in—we might remember what a hard passage is the birth—it shows us also the strait way out. From birth to death our life is bounded by u—and our way out of these bounds is through y, the planet of faith, the builder of the rainbow bridge to the Divine.
Where Are My Large Scissors? RUTH BAKER
DTAstrol, QHP, CMA
MY LARGE, lethally sharp scissors had vanished and I just could not remember what I had done with them. As I am one of those foolhardy people who cut their own fast-growing hair, this was something of an inconvenience.
r hour, u day w from void to G u D r In the chart I am represented by the Ascendant ruler, t, and by the w. The scissors as my property, are represented by the 2nd house ruler, y. The w's position in the 7th shows that I am thinking that my husband had taken them from their usual hiding place in my desk drawer and forgotten to replace them—a suggestion which he hotly denied. Both y and t are angular in the 10th, which indicates that the scissors are in my house1 and the separating conjunction of t and y reflects the parting between me and the scissors. Lilly says that the ascendant ruler 1
William Lilly, Christian Astrology p. 202.
separating from y shows that the goods are lost by 'excess of care of governing of things, or managing the affairs of the house.2 How very true! The w, applying to her dispositor, in this case r, shows that the scissors will be found,3 although the square aspect could cause some difficulty or delay. The significator of the scissors, y, and the Ascendant ruler, t, are in the same sign, also showing that the scissors are in the house 'where himselfe layeth up his own commodities, or such things as he most delights in'.4 y and t in the 10th point to the ordinary common room of the house. But whereabouts?
q w e r t y u ^
Sign r r r q r r w w
Exalt u w u u u y y
Trip u r u q u u t t
Term r u r t u u r e
Face w u w t w w w e
Fall Peregrine Peregrine Peregrine Detriment Peregrine Detriment
q MR r, y MR u Both y and t are in the air sign of z, indicating the upper part of the house. Both planets have just changed signs, which means that the place is near the joining of rooms5. z is a western sign. My study upstairs is to the west, but a thorough search there revealed nothing. I was puzzled by this as the chart seemed so clear. It was only after 10 days of spasmodic searching that I suddenly remembered the phrase, 'Where he layeth up his own commodities.' As well as pointing to an upper room, z also indicates one room within another.6 And then it hit me! Our large square landing lies to the west—a room between rooms! On this landing is a stool which is crammed with all kinds of my accumulated treasures… 'where he (or in this case 'she') layeth up his own commodities, or such things as he most delights in'. I opened the lid of the stool, and there were the scissors. I must have put them there in a moment of madness. Which just goes to show that it is not astrology which is in error—it is the astrologer! Mea culpa.
2 CA, p. 321.
3 CA, p. 323.
4 CA, p. 202.
5 CA, p. 203.
6 CA, p. 96.
On Astrologers
BERTRAND RUSSELL
THERE ‘mathematicized, and their horoscopes, instead of being inscribed cabalistically upon parchment, are neatly typed upon the best quarto typing paper. In this, they commit an error of judgment which makes it difficult to have faith in their power of deciphering the future in the stars. Do they believe themselves in the sciences that they profess? This is a difficult question. Everything marvelous honorably convinced of the truth of their doctrines….—Written 28 September 1932, included in the eminent
philosopher’s Why I am not a Christian (1957).
Let’s Consider Ronald Laurence Byrnes writes: I would like to offer an alternative prediction for the outcome of the Michael Jackson trial: the jury will hang, Michael will walk. Ken Gillman's assessment (Considerations XX, No. 2) of the trial chart seems to me to contradict itself: on the one hand, he points to the aspects which the w (ruling the nadir or outcome) makes: first comes a trine to exalted r, ruler of the Ascendant, in the 11th house: this is the plausibility of the prosecution's case in the jury's eyes (remember, the w also rules "the people", too, as if this were the case of "The People vs. Michael Jackson"). But this aspect is followed by a succession of aspects ending with a sextile to t (the significator of Michael Jackson), a trine to debilitated u and finally a trine to a severely debilitated e (evidence and testimony), all of which work against the prosecution. Yet Mr. Gillman ignores these last three aspects in his summation and says that the w F r means that Michael will be convicted. I think that with the w itself in fall, in a cadent house, this is hardly a likely scenario. It might be well to veer from the traditional approach momentarily and consider two bodies that have much to say in matters of jurisprudence: y and u. In the trial chart, these bodies are in mutual reception by exaltation, but in reality this means that u is in detriment. If u, ruling the Midheaven, can be taken as the verdict, then its retrograde disposition in a cadent House, D y retrograde, cadent and intercepted in the equivocal sign of z, certainly sounds like a hung jury to me. In addition, if the 11th house bodies in n indicate the nature of the jury's decision-making, they are here traditionally disposed of by y, which gets us right back to the indecisiveness of y's cadent interception, retrograde in z. Then there is the nebulousness of that o in b at the 11th cusp, introducing suspicion in the jury's minds as to the truth and motive of much of the testimony that the prosecution is relying upon. Virginia Reyer, in her article on Michael Jackson's chart (preceding Ken Gillman's article) brings in his lunar returns for March 9 and April 6, 2005, saying that ultimately they indicate restraints to his freedom. Well, of course: he is under subpoena, forced to attend these trial sessions. But the lunar return for May 3, 2005 says something quite different. Bear in mind that the meanings of the Houses change now: the ascendant no longer refers to the bringer of charges, as in the trial chart; the ascendant is now the native whose lunar return we are examining, and the descendant is the prosecution. This descendant is ruled (traditionally) by a debilitated u in the 11th house—the jury: the Sword of Justice seems to have been blunted. In the 7th house, of Michael's adversaries, are the planets of upset—t and i—in the sign of disarray, n. i is in mutual reception with o, further indicating disarray. Just inside the 8th cusp is the raison d'être of this chart—Michael's w at its lunar return. Ruling cusp 12, this is the fate of Michael Jackson, out of his hands now—and what kinds of aspects does it receive? It is trine to that poor u in 11th—the jury—and the w and u are each sextile to an elevated, dignified r ruling the Midheaven (the verdict). The w's dispositor (traditional) is y whose dispositor, in turn, is this r, the final dispositor of the chart, conjunct Michael's own t in 12th, ruler of his 11th house interception: the jury of his peers—his final salvation.
I would not go so far as to say that this means acquittal; no, r is
in loose square to o: there is enough suspicion in both directions to keep things from being clear-cut; but a hung jury—absolutely.
If the non-traditional rulerships are employed, the dispositor of the n stellium—representing Michael's adversaries and his tribulations—is o in b: that non-plussed jury again; and in the trial chart, o again disposes of the n stellium—in 11th this time: again, the jury, not knowing whom to believe. Another way of looking at the outcome of Michael's tribulations is to note that the w here is exactly trine the 4th cusp, of outcome. Now, one might say that since the w rules cusp 12—confinement—and is in n, this must mean jail. But the sign on the 4th cusp, x, refers to the 8th house, which contains Michael's w—the persona, the public figure under intense scrutiny and the control of others (8th house). This brings us back to the w's aspects—principally, the sextile to elevated and dignified r—10th ruler (and dispositor of the q which rules the ascendant): Michael is already in confinement, amid a flurry of innuendo (e/o = w), but the indecision of the jury ultimately liberates him: q/r = k; w/u = r; q/r in s referring to the 2nd house y in z. —received May 5, 2005
On 13th June 2005 (at approx. 2:15 pm) the jury found Jackson to be not guilty of any of the accusations made against him.
Let’s Consider Dymock Brose writes: A striking illustration of the effectiveness of Age Harmonics1 is provided by the Not Guilty verdict for entertainer Michael Jackson announced in Santa Maria, California, on 13 June 2005 at 2:15 pm. With the aid of the wonderful program of Age Harmonics Progressions in Solar Fire 6, we can pinpoint the exact date of an event without any complicated mathematical procedures.
There is no question but that Jackson would be acquitted of the charges against him: Benefic y links the Ascendant and Midheaven and is conjunct the w (good fortune); the w closely trines ^ in the House of Status; the “floating” MC is trined by both the w and y. This is, in fact, a defining moment for Jackson (Vertex on the Horizon). The q on the 5th cusp (Entertainments) sextile the Vertex means it is a moment of glory for him and he will ascend to new heights of popularity. Jackson’s adoring fans (r in the 11th house) are ever constant (u sextile). The w A y in the House of Earnings, with y ruling the House of Career and the w ruling the House of Entertainments, plus multiple connections to the bevy of significators in the 10th house assure him of continued bounty from the deployment of his undoubted talents. —Rostrevor, South Australia 1
Editor’s note: To obtain an individual’s age harmonics, each natal position, including the angles, is multiplied by the individual’s age at the time of the event. For example, Jackson’s natal q at 6º 19’ 05” h (or 156.3180556º) is multiplied by his age on 13th June 2005, namely 46.79087, to give 7314.25782º, which is the equivalent of 114.25782 (or 24º 15’ f) after 20 complete circles of 360º are removed.
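The editor's arithmetic can be reproduced in a few lines. The sketch below (my own helper, not part of the article) multiplies a natal longitude by the age and discards complete circles of 360°; the numbers are the ones quoted in the note.

// Age harmonic: natal longitude multiplied by age, reduced modulo 360°.
func ageHarmonic(natalLongitude: Double, age: Double) -> Double {
    return (natalLongitude * age).truncatingRemainder(dividingBy: 360)
}

let natalSun = 156.3180556   // 6° 19' 05" Virgo, as given in the note
let age = 46.79087           // Jackson's age on 13 June 2005
print(ageHarmonic(natalLongitude: natalSun, age: age))   // ≈ 114.26°, i.e. about 24° 15' Cancer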
Books Considered Skymates II: The Composite Chart by Steven and Jodie Forrest, 2005 Seven Paws Press, POB 2345, Chapel Hill, NC 27515 445 pages, paper. $25.95
STEVEN & JODIE Forrest are masters at simplifying and illuminating astrological jargon, making it accessible to all levels of astrologers. In their new book, Skymates II: The Composite Chart, the Forrests use their combined expertise to explore composite charts. This book complements their other book on relationship astrology, Skymates: Love, Sex and Evolutionary Astrology, which explores individuals’ relationship needs and analyzes the synastry symbolized in birthcharts. Taken together, these two books give astrologers superb reference tools for understanding relationship astrology. The Forrests consider composite chart analysis a very powerful technique for adding insight into relationship dynamics, symbolizing an “invisible third party” in a relationship. They write, “The invisible third party in every partnership may be impossible to see, but it reveals its nature and its hidden agenda very clearly in the composite [chart].” The book begins with an excellent overview of how to understand and work with composite charts, and then the lion’s share of the book is taken up with an extensive “cookbook” analysis of composite planets in signs and houses. The meaning of Lunar Nodes in composite charts is also explored and interpreted. The Forrests write that Lunar Nodes can help a couple “understand their souls’ history together” and help a couple “find their karmic story.” Clear, easy to understand explanations are the Forrests’ hallmark. Though their books are easy to read, there is no sacrifice of depth. Jodie and Steven Forrest, both well-known astrologers, are also skilled fiction writers and are able to integrate colorful imagery with solid astrology. Using sample charts to examine the kinds of information that can be gained by composites, the Forrests demonstrate the value of this astrological technique. The Forrests write “A birthchart and a composite chart both describe the same things: an optimal evolutionary path, the tools we have for the job, and what it looks like if we blow it.” Bound to be a popular text with novice astrologers who will most likely want to look up composite interpretations for all their relationships, this book is suitable for astrologers of all levels. Seasoned astrologers are well aware of the limitations of “cookbook astrology,” but they’ll still find the book’s insightful and imaginative interpretations useful and enjoyable to read. Skymates II: The Composite Chart deserves a space on every astrologer’s bookshelf—right next to Skymates: Love, Sex and Evolutionary Astrology. —Leda Blumberg
The Astrology of Midlife and Aging by Erin Sullivan, 2005 Penguin Group, Inc., 375 Hudson St., New York, NY 10014 232 pages, paper. $15.95
AT MIDLIFE everyone experiences a series of significant transits in which the outer planets make aspects to their natal positions. These transits symbolize the challenges and the psychological development being illuminated at that time. This midlife period, and the years that follow it, are the subjects of Erin Sullivan’s brilliant new book The Astrology of Midlife and Aging. Sullivan, a world-renowned astrologer, deftly blends her extensive knowledge of astrology, psychology and mythology to explain how major transits “reveal the passages and junctures of a lifetime.” Readers familiar with Sullivan know that her insights are deep and her interpretations ring true. Sullivan writes, “At midlife, the ‘system’ that one embodies undergoes radical chaos, which is a precreative time of collapsing the rigid structures of one’s extant system, while still retaining all attributes thereof. That is, we are still who we are, but not as well organized and contained as before the Uranus opposite Uranus transit.” The Uranus opposite Uranus transit marks the beginning of a long series of transits that denotes the midlife period from approximately ages 37 to 60. Sullivan refers to the half-Uranus cycle as the gateway to midlife’s transition. “The half-Uranus,” she writes, “is a time of awakening to the unlived life and the time to assess your assets and liabilities objectively.” “So a ‘midlife crisis’ is about you meeting you in a new circumstance” writes Sullivan. “Depending on your personal life and circumstances, the cultural ethos, and the generational intent, there are many generic aspects that are synchronous with the entry into midlife. These aspects, from each planet from Saturn outward, define, demarcate, and offer meaning to the rites of passage that support our evolving life changes throughout the twenty years of the midlife experience.” Understanding the significance of outer planet transits during the midlife transition can help one age consciously and aid in the journey towards wholeness. “The turning points in life are cyclic and ritualistic,” writes Sullivan. “If we treat them respectfully, and make the most of the times in which we find ourselves, then we are evolving.”
Sullivan writes with clarity as she reveals the potential meanings of the various transits that everyone experiences in midlife and the period afterwards, a time that Sullivan gracefully calls “full adult maturity.” Her book explores how the outer planets reflect one’s inner experiences. There is much great insight to be gained from these pages. The Astrology of Midlife and Aging contains some fascinating illustrations, for example, a diagram representing ‘the big picture’ of past and future, and a diagram of the development of the individual in the horoscope. Also included are tables of aspects for the different stages in life. The book ends with a wonderful, comprehensive list of recommended reading and resource materials that complement the material within. From my perspective, any book that Erin Sullivan has written should be read by astrologers who are intrigued by the great depth of understanding that can be gained when astrology is united with psychology. Like Sullivan’s other books, The Astrology of Midlife and Aging is an important contribution to the astrological literature. Read this book, then reread it as you experience the time periods discussed within. —Leda Blumberg
The Horary Textbook
by John Frawley, 2005 Apprentice Books, 85 Steeds Rd, London N10 1JB, England 270 pages, paper. £22, USA: $40
IN AN IDEAL WORLD a neophyte astrologer would have commenced his study of our ancient craft with a thorough investigation of Horary before being allowed to come to grips with the more complex areas of natal, electional, medical, financial and mundane astrology. One of the reasons that this does not happen—in part an explanation for the abysmal reputation of much of today’s astrology—has been the absence of a clear description, in modern language, of how to go about reading a horary chart. We have William Lilly’s magnificent Christian Astrology but that was written fully 350 years ago and seems difficult for modern students to comprehend. None of the several books that have appeared in more recent times attempting to explain the how-to of horary have fully fulfilled their stated intent; most fail to achieve the clarity of Lilly’s out-dated 17th century style. Over recent years John Frawley has achieved a reputation for being an extremely competent horary astrologer. This is based on his successful predictions on TV, the word-of-mouth recommendations of his clients and the noted success of his many students. John publishes a well-received magazine, The Astrologers’ Apprentice, and has previously written two books, the award-winning The Real Astrology and The Real
Astrology Applied, which together with his many articles, some of which have appeared in Considerations, demonstrate that he possesses a gift for presenting complex ideas in a simple and clear fashion. John gave himself the task of writing “a clear and comprehensive guide to the craft of horary astrology.” With The Horary Textbook he has fully achieved this stated intent. The book is divided into two main parts, plus four appendices and an index. The first part presents clear explanations of the basic techniques used in horary. These include the obvious, namely fourteen chapters that cover the Houses, Planets, Signs, Essential and Accidental Dignities, Receptions, Aspects, Antiscia, Fixed Stars, Arabian Parts, Timing, and What is the Question and Who is Asking it? I say these are the obvious items to be included in such a book, and so they are, but there is much in how these basics are explained and discussed that will be helpful to even the most experienced horary practitioners. Indeed, some of the statements may prompt some readers to rethink how they interpret a horary. For example, opening the book at random and selecting a couple of such statements from consecutive pages: It is the condition of the planets and their attitudes to each other—the dignities and receptions—that show whether an aspect is fortunate. Not the nature of the aspect itself. (p. 89) Aspects cannot be formed if planets are in the wrong sign. (p. 90) It doesn’t matter which planet applies to which. (p. 91) The second part of The Horary Textbook is the true meat of the book. Here we learn how these tools are used to provide answers relating to each of the twelve houses, which amounts to virtually any question that anyone could ever ask, plus brief chapters on The Weather and Electing with Horary. Extensive use is made of receptions throughout the book, something that few previous authors including Lilly have fully explained. Understanding this concept will enable the horary astrologer to answer questions concerning the intent and feelings of one person for another, something I never found in Lilly. The Horary Textbook is not a hefty tome but it contains fully everything—and more—that is valuable in Christian Astrology and does so in sharp, crisp modern-day language. John Frawley has written a text that is all a would-be horary astrologer needs to master this so-important cornerstone of our craft. A well-thumbed copy should be an ever-present on the desk of all astrologers. —Ken Gillman
Who?
Ruth Baker, a regular and most welcome contributor to Considerations on horary matters, is a professional violinist. She lives on the Essex coast in England.
Leda Blumberg is an author, astrologer and avid horsewoman who lives on a beautiful farm in New York state. You can reach Leda at Ledabc@optonline.net.
After producing some 280 monthly issues, Dymock Brose recently closed the Astrologers’ Forum, the highly renowned journal he published and edited for 23+ years. Considerations hopes to include his writings on those occasions that old habits die hard. Dymock lives in Rostrevor, South Australia.
Gerald Jay Markoe is a best-selling new age recording artist with several recordings at the top of the music charts; he is also a lifelong student of meditation, yoga and astrology. A classically trained musician who studied at the Juilliard and the Manhattan School of Music, he custom-composes music based on the patterns in an individual’s birth chart. Gerald splits his time between homes in NYC and Puerto Rico, and can be contacted at GMarkoe1@aol.com.
Bill Meridian, who lives in Vienna, Austria, designed the AstroAnalyst and Financial Trader financial astrology software programs. His best-selling book Planetary Stock Trading is now in its extensively revised second edition.
Margaret Millard (1916-2004). A medical doctor who brought much needed experience, insight and common sense to astrology.
Virginia A. Reyer can communicate in six languages. She is a Switzerland-born astrologer with many years’ experience, who is currently researching the Vertex. Virginia now lives in Glendale, Arizona.
Bertrand Russell (1872-1970), one of the twentieth century’s greatest philosophers and among its most complex and controversial figures, was the author of innumerable books as well as being the founder of Ban the Bomb and other effective peace movements.
Considerations Magazine
|
https://issuu.com/considerations/docs/20-3
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Learn about Google’s new service launched recently – Google Cloud Run – and how it allows you to run server-side Swift serverless 😱. In part 1 we will cover the basics and deploy a Vapor application in less than 10 minutes.
Part 1: Deploying a Server-side Swift application
Serverless
Serverless is there to make our developers’ lives easier and costs more manageable. By offering us infrastructure and platforms as a service on a per-use basis, we don’t have to deal with configuring or monitoring servers, or think too much about scaling. Often these services are offered by cloud providers as FaaS (Function as a Service), e.g. AWS Lambda, Azure Functions, etc.
Apart from the usual drawbacks when going for a serverless architecture (see more below), you are often limited to a small number of supported programming languages such as Python, Ruby, Node.js and Go (IBM supports server-side Swift functions).
Google Cloud Run
Well, Google introduced a new solution, Google Cloud Run, that provides you with a fully managed serverless execution environment for containerized apps. This means you can run any container — with the language of your choice — on their platform.
Google Cloud Run Overview
As of writing this you can try out Google Cloud Platform for free, getting $300 free credit to spend over 12 months.
Running Server-side Swift on Google Cloud Run
I’m choosing Vapor as my weapon of choice when it comes to server-side Swift, but this should also work with alternative frameworks such as Kitura or Perfect.
The following steps are required:
- Setup an account on Google Cloud platform (not covered in this tutorial)
- Create a project on Google Cloud platform
- Install Google Cloud toolbox (CLI)
- Setup your Vapor project
- Create and configure the Dockerfile
- Build your project and push the image to Google Cloud Container Registry
- Deploy the image to Google Cloud Run
Okay, let’s get our hands dirty! 🙌
Create project on GCR (or use an existing one)
For testing out things it is always nice to create a new project that you can easily delete later (and with it, all its resources).
- Go to
- Enter a name for your project, create the project
- Remember the project id, e.g.
gcr-blog-example
Install Google Cloud toolbox
I’m assuming you are on a Mac and have homebrew installed.
Install the Google Cloud SDK command line interface:
$ brew install homebrew/cask/google-cloud-sdk
Initialize gcloud, providing access to your Google Cloud account and select your project:
$ gcloud init
Install the beta components needed for Google Cloud Run:
$ gcloud components install beta
$ gcloud components update
Setup your Vapor project
I’m assuming you already have the Vapor toolbox installed. If not, please follow the instructions here (macOS).
Note: you can get the source code on Github:
Create a new Vapor project (using the default api template):
$ vapor new gcr-example-vapor
Generate the Xcode project and open it:
$ vapor xcode -y
Let’s collect some information about the container we’re currently on. Change your boot.swift to:
import Vapor

internal struct ContainerInfo: Content {
    static let date: Date = Date()
    static let containerID: Int = Int.random(in: 0...Int.max)

    let creationDate = ContainerInfo.date
    let currentDate = Date()
    let uuid = ContainerInfo.containerID
}

/// Called after your application has initialized.
public func boot(_ app: Application) throws {
    // Your code here
    _ = ContainerInfo()
}
In the bottom of routes.swift add this route:
router.get("gcr-example") { req in return ContainerInfo() }
Run the project locally (CMD+R) — make sure to select the Run target.
Try our new endpoint! In terminal enter:
$ curl localhost:8080/gcr-example
Result:
{
  "currentDate": "2019-05-22T04:26:49Z",
  "uuid": "1939D9E0-A67A-412D-9C4F-55722EFA1751",
  "creationDate": "2019-05-22T04:17:18Z"
}
⚠️ Note that the default template from Vapor uses SQLite as Fluent driver which doesn’t make a lot of sense when working with horizontally scaled systems. If you need to persist your data (across nodes) you should use a different solution, e.g. Google Cloud SQL, or any separate database server.
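If you do need persistence, one option is to swap the SQLite driver for PostgreSQL (for example a Cloud SQL instance). The sketch below is an assumption, not part of the original tutorial: it targets a Vapor 3 project using the FluentPostgreSQL package, and the environment-variable names and fallback values are placeholders. It would replace the SQLite setup in configure.swift.

import FluentPostgreSQL
import Vapor

public func configure(_ config: inout Config, _ env: inout Environment, _ services: inout Services) throws {
    // Register the PostgreSQL provider instead of the default SQLite one.
    try services.register(FluentPostgreSQLProvider())

    // Connection details come from the environment so the same image can run
    // locally and on Cloud Run; the variable names below are placeholders.
    let dbConfig = PostgreSQLDatabaseConfig(
        hostname: Environment.get("DB_HOST") ?? "localhost",
        username: Environment.get("DB_USER") ?? "vapor",
        database: Environment.get("DB_NAME") ?? "vapor",
        password: Environment.get("DB_PASSWORD")
    )

    var databases = DatabasesConfig()
    databases.add(database: PostgreSQLDatabase(config: dbConfig), as: .psql)
    services.register(databases)

    // Register migrations for your models here as usual,
    // e.g. migrations.add(model: Todo.self, database: .psql)
}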
Create and configure the Dockerfile
The default Vapor template already includes a Dockerfile 🎉. Let’s take this one as a basis and adjust to fit our (Google’s) needs:
- Create a copy of the Dockerfile shipped with the example:
$ cp web.Dockerfile Dockerfile
- Change the ENV in line 30 to development
- Google Cloud Run expects containers to listen on port 8080, therefore change the port from 80 to 8080 in line 32
# You can set the Swift version to what you need for your app. Versions can be found here:
FROM swift:5.0 as builder

# For local build, add `--build-arg env=docker`
# In your application, you can use `Environment.custom(name: "docker")` to check if you're in this env
ARG env

RUN apt-get -qq update && apt-get install -y \
    libssl-dev zlib1g-dev \
    && rm -r /var/lib/apt/lists/*
WORKDIR /app
COPY . .
RUN mkdir -p /build/lib && cp -R /usr/lib/swift/linux/*.so* /build/lib
RUN swift build -c release && mv `swift build -c release --show-bin-path` /build/bin

# Production image
FROM ubuntu:18.04
ARG env
# DEBIAN_FRONTEND=noninteractive for automatic UTC configuration in tzdata
RUN apt-get -qq update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    libatomic1 libicu60 libxml2 libcurl4 libz-dev libbsd0 tzdata \
    && rm -r /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /build/bin/Run .
COPY --from=builder /build/lib/* /usr/lib/

# Uncomment the next line if you need to load resources from the `Public` directory
#COPY --from=builder /app/Public ./Public

# Uncomment the next line if you are using Leaf
#COPY --from=builder /app/Resources ./Resources

ENV ENVIRONMENT=development

ENTRYPOINT ./Run serve --env $ENVIRONMENT --hostname 0.0.0.0 --port 8080
(Optional) If you have Docker installed, you can test locally now by building and running it:
$ docker build -t gcr-example-vapor .
$ docker run -it -p 8080:8080 gcr-example-vapor
This will make our endpoint available at localhost:8080/gcr-example. You can skip this step if you don't have Docker.
Build your image & deploy to Google Cloud Run
Now we are almost done! We are using the CLI tool gcloud that we installed earlier to build and upload our image to Google Cloud Container Registry. Then we will deploy it using Google Cloud Run.
Submit the build:
$ gcloud builds submit --tag gcr.io/gcr-blog-example/gcr-example-vapor
Deploy and run it on Google Cloud Run:
$ gcloud beta run deploy --image gcr.io/gcr-blog-example/gcr-example-vapor
(replace gcr-blog-example with your project id and gcr-example-vapor with a container name of your choice)
In the output you will see the URL, e.g. …-{some-id}-uc.a.run.app.
We are done now, you can try it out with curl or in the browser of your choice:
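For example (the host below is just a placeholder; use the service URL printed by the deploy command):

$ curl https://gcr-example-vapor-{some-id}-uc.a.run.app/gcr-example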
Interesting: creationDate and uuid never change 🤔. That’s something for the next blog posts 🕵️♂️.
Up next
This was as easy as it should be when dealing with serverless applications. In less than 10 minutes we were able to build & configure a project and deploy it into the wild using Google Cloud Run.
In the next articles we will focus on the advantages and disadvantages of this architecture, take a deep look at performance and create more elaborate examples.
Thanks for reading!
– Christian
Links
- Github repository with all sources:
- Nodes Engineering Blog:
- Vapor:
- Google Cloud Platform:
- Google Cloud Run:
|
https://engineering.monstar-lab.com/2019/05/27/serverless-serverside-swift-using-google-cloud-run
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Not sure why; I finally just sent it through queues, but I would like this to work.
Here is the source code for what does not work. I have patched a sine wave to the I2S and can see that it works. I have also run noteFrequency on the USB input and know that it works, and since I have used queues and that works, it is just the AudioConnection.
#include <Audio.h>
#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SerialFlash.h>
// GUItool: begin automatically generated code
AudioInputUSB usb1; //xy=226,133
AudioOutputI2S i2s1; //xy=399,126
AudioConnection patchCord1(usb1, 0, i2s1, 0);
AudioConnection patchCord2(usb1, 1, i2s1, 1);
// GUItool: end automatically generated code
void setup() {
// put your setup code here, to run once:
AudioMemory(60);
}
void loop() {
// put your main code here, to run repeatedly:
}
|
https://forum.pjrc.com/threads/62561-USB-input-to-I2S-output-does-not-work-for-Teensy4-1?s=164c9147f72a828120bfaf56f688477e
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
isblank -- space or tab character test
Standard C Library (libc, -lc)
#include <ctype.h>
int
isblank(int c);
The isblank() function tests for a space or tab character. For any
locale, this includes the following standard characters:
``\t'' `` ''
In the "C" locale, a successful isblank() test is limited to these characters
only. For single C char locales (see multibyte(3)), the value of the
argument must be representable as an unsigned char or equal the value of EOF.
Although isblank() accepts arguments outside of the range of the unsigned
char type in locales with large character sets, this is a 4.4BSD extension
and the iswblank() function should be used instead for maximum
portability.
The isblank() function returns zero if the character tests false and
returns non-zero if the character tests true.
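The following usage sketch is not part of the original manual page; it simply counts the blank characters in a string using only the standard headers shown:

     #include <ctype.h>
     #include <stdio.h>

     int
     main(void)
     {
             const char *s = "a b\tc";
             int nblanks = 0;

             for (const char *p = s; *p != '\0'; p++)
                     if (isblank((unsigned char)*p))
                             nblanks++;

             printf("%d blank characters\n", nblanks);   /* prints 2 */
             return (0);
     }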
ctype(3), iswblank(3), multibyte(3), ascii(7)
The isblank() function conforms to ISO/IEC 9899:1999 (``ISO C99'').
FreeBSD 5.2.1 October 6, 2002 FreeBSD 5.2.1
|
https://nixdoc.net/man-pages/FreeBSD/man3/isblank.3.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Yabeda
This software is Work in Progress: features will appear and disappear, API will be changed, your feedback is always welcome!
Extendable solution for easy setup of monitoring in your Ruby apps.
Read more about Yabeda and the reasoning behind it in Martian Chronicles: “Meet Yabeda: Modular framework for instrumenting Ruby applications”
Installation
Most of the time you don't need to add this gem to your Gemfile directly (unless you're only collecting your custom metrics):
gem 'yabeda'
# Then add monitoring system adapter, e.g.:
# gem 'yabeda-prometheus'
And then execute:
$ bundle
Usage
Declare your metrics:
Yabeda.configure do
  group :your_app do
    counter   :bells_rang_count, comment: "Total number of bells being rang", tags: %i[bell_size]
    gauge     :whistles_active,  comment: "Number of whistles ready to whistle"
    histogram :whistle_runtime do
      comment "How long whistles are being active"
      unit :seconds
    end
  end
end
After your application has been initialized and all metrics have been declared, you need to apply the Yabeda configuration:
Yabeda.configure!
If you're using Ruby on Rails then it will be configured automatically!
Access metric in your app and use it!
def ring_the_bell(id)
  bell = Bell.find(id)
  bell.ring!
  Yabeda.your_app.bells_rang_count.increment({bell_size: bell.size}, by: 1)
end

def whistle!
  Yabeda.your_app.whistle_runtime.measure do
    # Run your code
  end
end
Set up collection of metrics that are not tied to specific events in your application, e.g. reporting your app's current state:
Yabeda.configure do
  # This block will be executed periodically few times in a minute
  # (by timer or external request depending on adapter you're using)
  # Keep it fast and simple!
  collect do
    your_app.whistles_active.set({}, Whistle.where(state: :active).count)
  end
end
- Optionally setup default tags that will be added to all metrics:

Yabeda.configure do
  default_tag :rails_environment, 'production'
end

# You can redefine them for limited amount of time
Yabeda.with_tags(rails_environment: 'staging') do
  Yabeda.your_app.bells_rang_count.increment(bell.size, by: 1)
end

- See the docs for the adapter you're using
- Enjoy!
Available monitoring system adapters
- Prometheus
- Datadog
- NewRelic
- Statsd
- …and more! You can write your own adapter and open a pull request to add it into this list.
Roadmap (aka TODO or Help wanted)
- Ability to change metric settings for individual adapters
histogram :foo, comment: "say what?" do
  adapter :prometheus do
    buckets [0.01, 0.5, …, 60, 300, 3600]
  end
end
- Ability to route some metrics only for given adapter:
adapter :prometheus do
  include_group :sidekiq
end
|
https://www.rubydoc.info/gems/yabeda/0.8.0
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Description
Graphite client for Elixir with zero dependencies
Graphitex alternatives and similar packages
Based on the "Debugging" category
- observer_cli (9.6/1.7): Visualize Elixir & Erlang nodes on the command line, it aims to help developers debug production systems.
- visualixir (9.6/0.0): A process visualizer for remote BEAM nodes.
- Wobserver (9.5/0.0): Web based metrics, monitoring, and observer.
- elixometer (9.4/5.8): A light Elixir wrapper around exometer.
- exometer (9.3/0.0): Basic measurement objects and probe behavior in Erlang.
- eper (9.3/0.0): Erlang performance and debugging tools.
- eflame (8.8/0.0): Flame Graph profiler for Erlang.
- ex_debug_toolbar (8.6/0.0): A toolbar for Phoenix projects to interactively debug code and display useful information about requests: logs, timelines, database queries etc.
- beaker (8.2/0.0): Statistics and Metrics library for Elixir.
- dbg (7.2/0.0): Distributed tracing for Elixir.
- rexbug (6.6/1.9): An Elixir wrapper for the redbug production-friendly Erlang tracing debugger.
- exrun (6.5/0.0): Distributed tracing for Elixir with rate limiting and simple macro-based interface.
- erlang-metrics (6.5/0.0): A generic interface to different metrics systems in Erlang.
- quaff (6.2/0.0): The Debug module provides a simple helper interface for running Elixir code in the erlang graphical debugger.
- GenMetrics (6.2/0.0): Elixir GenServer and GenStage runtime metrics.
- git_hooks (4.7/6.3): Add git hooks to Elixir projects.
- booter (3.9/0.0): Boot an Elixir application, step by step.
- eh (3.4/0.0): A tool to look up Elixir documentation from the command line.
- ether (1.0/0.0): Ether provides functionality to hook Elixir into the Erlang debugger.
README
Graphitex
Graphite client for Elixir with zero dependencies
Installation
If available in Hex, the package can be installed as:
- Add
graphitexto your list of dependencies in
mix.exs:
def deps do [{:graphitex, "~> 0.1.0"}] end
- Ensure
graphitexis started before your application:
def application do [applications: [:graphitex]] end
- Set up configuration in config/config.exs:
config :graphitex, host: '188.166.101.102', port: 2003
- API:
Graphitex.metric(4, "aws.cluster_one.avg_cpu")
# or
Graphitex.metric(4, ["aws", "cluster_one", "avg_cpu"])
# or
Graphitex.metric(41.0, [:aws, :cluster_one, :avg_cpu])
By default :os.system_time(:seconds) is used as the timestamp, but you can pass ts as an argument:
Graphitex.metric(41, "aws.cluster_one.avg_cpu", :os.system_time(:seconds))
likewise there is a shortcut
Graphitex.metric(41, "aws.cluster_one.avg_cpu", Graphitex.now)
Insert batch:
[{4, "client.transactions.east"},
 {2, "client.transactions.west"},
 {5, "client.transactions.north", Graphitex.now}]
|> Graphitex.metric_batch()
*Note that all licence references and agreements mentioned in the Graphitex README section above are relevant to that project's source code only.
|
https://elixir.libhunt.com/graphitex-alternatives
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
I am trying to run the following code:
from Cryptodome.Cipher import AES
key = '0123456789abcdef'
IV='0123456789abcdef'
mode = AES.MODE_CBC
encryptor = AES.new(key, mode,IV=IV)
text = 'hello'
ciphertext = encryptor.encrypt(text)
But I am getting the following error:
Traceback (most recent call last):
File "F:/Python Scripts/sample.py", line 8, in <module>
ciphertext = encryptor.encrypt(text)
File "F:\Python Scripts\Crosspost\venv\lib\site-packages\Cryptodome\Cipher\_mode_cbc.py", line 178, in encrypt
c_uint8_ptr(plaintext),
File "F:\Python Scripts\Crosspost\venv\lib\site-packages\Cryptodome\Util\_raw_api.py", line 145, in c_uint8_ptr
raise TypeError("Object type %s cannot be passed to C code" % type(data))
TypeError: Object type <class 'str'> cannot be passed to C code
Hey,
Did you forget to encode the values as bytes?
You can try something like this:
key = b'0123456789abcdef'
IV= b'0123456789abcdef' #need to be encoded too
mode = AES.MODE_CBC
encryptor = AES.new(key, mode,IV=IV)
text = b'hello'
ciphertext = encryptor.encrypt(text)
I hope this will work.
You have to pass byte strings as values, so try encoding the data values to bytes and then pass them. Try this:
from Cryptodome.Cipher import AES
key = '0123456789abcdef'
IV='0123456789abcdef'
mode = AES.MODE_CBC
encryptor = AES.new(key.encode('utf8'), mode,IV=IV.encode('utf8'))
text = 'hello'
ciphertext = encryptor.encrypt(text.encode('utf8'))
|
https://www.edureka.co/community/55191/python-typeerror-object-type-class-str-cannot-passed-to-code?show=55192
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
MQTT topics
MQTT topics identify AWS IoT messages. AWS IoT clients identify the messages they publish by giving the messages topic names. Clients identify the messages to which they want to subscribe (receive) by registering a topic filter with AWS IoT Core. The message broker uses topic names and topic filters to route messages from publishing clients to subscribing clients.
The message broker uses topics to identify messages sent using MQTT and sent using HTTP to the HTTPS message URL.
While AWS IoT supports some reserved system topics, most MQTT topics are created and managed by you, the system designer. AWS IoT uses topics to identify messages received from publishing clients and select messages to send to subscribing clients, as described in the following sections. Before you create a topic namespace for your system, review the characteristics of MQTT topics to create the hierarchy of topic names that works best for your IoT system.
Topic names
Topic names and topic filters are UTF-8 encoded strings. They can represent a hierarchy of information by using the forward slash (/) character to separate the levels of the hierarchy. For example, this topic name could refer to a temperature sensor in room 1:
sensor/temperature/room1
In this example, there might also be other types of sensors in other rooms with topic names such as:
sensor/temperature/room2
sensor/humidity/room1
sensor/humidity/room2
As you consider topic names for the messages in your system, keep in mind:
Topic names and topic filters are case sensitive.
Topic names must not contain personally identifiable information.
Topic names that begin with a $ are reserved topics to be used only by AWS IoT Core.
AWS IoT Core can't send or receive messages between AWS accounts or Regions.
For more information on designing your topic names and namespace, see our
whitepaper, Designing MQTT Topics for AWS IoT Core
For examples of how apps can publish and subscribe to messages, start with Getting started with AWS IoT Core and AWS IoT Device and Mobile SDKs.
The topic namespace is limited to an AWS account and Region. For example, the
sensor/temp/room1 topic used by an AWS account in one Region is
distinct from the
sensor/temp/room1 topic used by the same AWS
account in another Region or used by any other AWS account in any Region.
Topic ARN
All topic ARNs (Amazon Resource Names) have the following form:
arn:aws:iot:aws-region:AWS-account-ID:topic/Topic
For example, arn:aws:iot:us-west-2:123EXAMPLE456:topic/application/topic/device/sensor is an ARN for the topic application/topic/device/sensor.
Topic filters
Subscribing clients register topic filters with the message broker to specify the message topics that the message broker should send to them. A topic filter can be a single topic name to subscribe to a single topic name or it can include wildcard characters to subscribe to multiple topic names at the same time.
Publishing clients can't use wildcard characters in the topic names they publish.
Two wildcard characters can be used in a topic filter: + matches exactly one topic level, and # matches the specified level and all deeper levels (it must be the last character in the filter).
Using wildcards with the previous sensor topic name examples:
- A subscription to sensor/# receives messages published to sensor/, sensor/temperature, and sensor/temperature/room1, but not messages published to Sensor.
- A subscription to sensor/+/room1 receives messages published to sensor/temperature/room1 and sensor/humidity/room1, but not messages sent to sensor/temperature/room2 or sensor/humidity/room2.
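As an illustration only (not taken from the AWS documentation), the sketch below uses the generic paho-mqtt Python client to show the difference between publishing to concrete topic names and subscribing with a wildcard topic filter. The endpoint, port, and certificate paths are placeholders you would replace with your AWS IoT endpoint and device credentials.

import paho.mqtt.client as mqtt

ENDPOINT = "your-ats-endpoint.iot.us-west-2.amazonaws.com"  # placeholder

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client = mqtt.Client()
# TLS with your device certificate and private key (paths are placeholders)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.on_message = on_message
client.connect(ENDPOINT, 8883)

# Subscribe with a topic filter: '+' matches exactly one level
client.subscribe("sensor/+/room1")

# Publish to concrete topic names (wildcards are not allowed here)
client.publish("sensor/temperature/room1", "21.5")
client.publish("sensor/humidity/room1", "40")

client.loop_forever()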
Topic filter ARN
All topic filter ARNs (Amazon Resource Names) have the following form:
arn:aws:iot:aws-region:AWS-account-ID:topicfilter/TopicFilter
For example, arn:aws:iot:us-west-2:123EXAMPLE456:topicfilter/application/topic/#/sensor is an ARN for the topic filter application/topic/#/sensor.
|
https://docs.aws.amazon.com/iot/latest/developerguide/topics.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
#include <itkXMLFile.h>
A template base class for an XMLReader. Its purpose is just to define the simple interface for extracting the object resulting from reading the XML file. Since it doesn't define any of the pure virtual methods in XMLReaderBase, it can't be instantiated by itself.
Definition at line 100 of file itkXMLFile.h.
Definition at line 105 of file itkXMLFile.h.
Definition at line 125 of file itkXMLFile.h.
Get the output object, after an XML File has been successfully parsed.
Definition at line 119 of file itkXMLFile.h.
Set the output object. Doesn't make sense for a client of the XMLReader, but could be used in derived class to assign pointer to result object.
Definition at line 111 of file itkXMLFile.h.
Definition at line 131 of file itkXMLFile.h.
|
https://itk.org/Doxygen/html/classitk_1_1XMLReader.html
|
CC-MAIN-2020-45
|
en
|
refinedweb
|
Building A Search Page with Elasticsearch and .NET- II
Search index population
Elasticsearch is completely document-oriented and it stores entire documents in its index. But you need to to create a client to communicate with Elasticsearch.
var node = new Uri("");
var settings = new ConnectionSettings(node);
settings.DefaultIndex("stackoverflow");
var client = new ElasticClient(settings);
Here we create a class that represents our document.
public class Post
{
public string Id { get; set; }
public DateTime? CreationDate { get; set; }
public int? Score { get; set; }
public int? AnswerCount { get; set; }
public string Body { get; set; }
public string Title { get; set; }
[String(Index = FieldIndexOption.NotAnalyzed)]
public IEnumerable<string> Tags { get; set; }
[Completion]
public IEnumerable<string> Suggest { get; set; }
}
Elasticsearch is known to dynamically resolve the document type and its fields at index time,one can override field mappings or use features on fields in order to give more advanced usages. In the below example we decorated our POCO class with some features so we need to develop mappings with AutoMap.
var indexDescriptor = new CreateIndexDescriptor("stackoverflow")
    .Mappings(ms => ms
        .Map<Post>(m => m.AutoMap()));
Then, we can create the index and apply the mappings.
client.CreateIndex("stackoverflow", i => indexDescriptor);
After defining our mappings and creating an index, we can feed it with documents.
Elasticsearch does not have any handler to import specific file formats such as XML or CSV, but since it has client libraries for different languages, it is quite easy to build an importer. As the StackOverflow dump is in XML format, we use the .NET XmlReader class to read question rows, map each to an instance of Post and add the objects to a collection, as sketched below.
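This helper is not shown in the original article; the following is a rough sketch of what LoadPostsFromFile could look like, assuming the standard StackOverflow Posts.xml dump format where each post is a single <row .../> element (requires using System.Xml and System.Collections.Generic):

private static IEnumerable<Post> LoadPostsFromFile(string path)
{
    using (var reader = XmlReader.Create(path))
    {
        while (reader.Read())
        {
            // each question/answer in the dump is one <row .../> element
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "row")
            {
                yield return new Post
                {
                    Id = reader.GetAttribute("Id"),
                    Title = reader.GetAttribute("Title"),
                    Body = reader.GetAttribute("Body")
                    // map CreationDate, Score, AnswerCount and Tags the same way
                };
            }
        }
    }
}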
Next, we need to iterate over batches of 1–10k objects and call the IndexMany method on the client:
int batch = 1000;
IEnumerable<Post> data = LoadPostsFromFile(path);
foreach (var batches in data.Batch(batch))
{
    client.IndexMany<Post>(batches, "stackoverflow");
}
Full text search
Now that the document database is populated, let's define the search service interface:
public interface ISearchService<T>
{
    SearchResult<T> Search(string query, int page, int pageSize);
    SearchResult<Post> SearchByCategory(string query, IEnumerable<string> tags, int page, int pageSize);
    IEnumerable<string> Autocomplete(string query, int count);
}
and a search result class:
public class SearchResult<T>
{
public int Total { get; set; }
public int Page { get; set; }
public IEnumerable<T> Results { get; set; }
public int ElapsedMilliseconds { get; set; }
}
The search method will perform the multi-match query against user input. The multi-match query is useful while running the query against multiple fields. By using this, we can see how relevant the Elasticsearch results are with the default configuration.
First, you call the Query method, which is a container for any specific query we want to perform. Next, call the MultiMatch method, which calls the Query method with the actual search phrase as a parameter and a list of fields that you want to search against. In our context, these are Title, Body, and Tags.
var result = client.Search<Post>(x => x // use search method
    .Query(q => q // define query
        .MultiMatch(mp => mp // of type MultiMatch
            .Query(query) // pass text
            .Fields(f => f // define fields to search against
                .Fields(f1 => f1.Title, f2 => f2.Body, f3 => f3.Tags))))
    .From(page - 1) // apply paging
    .Size(pageSize)); // limit to page size
return new SearchResult<Post>
{
Total = (int)result.Total,
Page = page,
Results = result.Documents,
ElapsedMilliseconds = result.Took
};
The raw request to Elasticsearch will look like:
GET stackoverflow/post/_search
{
  "query":
  {
    "multi_match":
    {
      "query": "elastic search",
      "fields": ["title", "body", "tags"]
    }
  }
}
How to group by tags
Once the search returns results, we want to group them by tags so that users can refine their search. In order to group results into categories, we use bucket aggregations. They allow us to compose buckets of documents which either fall into given criteria or not. As we want to group by tags, which is a text field, we will use the terms aggregation.
Let’s look at feature on the Tags field
[String(Index = FieldIndexOption.NotAnalyzed)]
public IEnumerable<string> Tags { get; set; }
It tells Elasticsearch to index the input as-is, without analyzing or processing it. It would not change a 'unit-testing' tag to 'unit' and 'testing', etc.
Now, we extend the search result class with a dictionary containing the tag name and the number of posts tagged with it.
public Dictionary<string, long> AggregationsByTags { get; set; }
Next, we need to add Aggregation, of type Term, to our query and give it a name.
var result = client.Search<Post>(x => x
    .Query(q => q
        .MultiMatch(mp => mp
            .Query(query)
            .Fields(f => f
                .Fields(f1 => f1.Title, f2 => f2.Body, f3 => f3.Tags))))
    .Aggregations(a => a // aggregate results
        .Terms("by_tags", t => t // use term aggregations and name it
            .Field(f => f.Tags) // on field Tags
            .Size(10))) // limit aggregation buckets
    .From(page - 1)
    .Size(pageSize));
The search results now contain aggregation results so the newly-added field is used to return it back to the caller:
AggregationsByTags = result.Aggs.Terms("by_tags").Items
    .ToDictionary(x => x.Key, y => y.DocCount)
The next step allows users to select one or more tags and use them as a filter. Adding a new method to the interface enables us to pass the selected tags to the search method:
SearchResult<Post> SearchByCategory(string query, IEnumerable<string> tags, int page, int pageSize);
In the method implementation, we need to turn the tags into an array of filters.
var filters = tags
.Select(c => new Func<FilterDescriptor<Post>, FilterContainer>(x => x
.Term(f => f.Tags, c)));
Then, we need to build our search as a bool query. Bool queries combine multiple queries. The queries inside clauses will be used for searching documents and applying a relevance score to them.
Then we can append a Filter clause which also contains a Bool query which filters the result set.
var result = client.Search<Post>(x => x
    .Query(q => q
        .Bool(b => b
            .Must(m => m // apply clause that must match
                .MultiMatch(mp => mp // our initial search query
                    .Query(query)
                    .Fields(f => f
                        .Fields(f1 => f1.Title, f2 => f2.Body, f3 => f3.Tags))))
            .Filter(f => f // apply filter on the results
                .Bool(b1 => b1
                    .Must(filters))))) // with array of filters
    .Aggregations(a => a
        .Terms("by_tags", t => t
            .Field(f => f.Tags)
            .Size(10)))
    .From(page - 1)
    .Size(pageSize));
The aggregations work in the scope of a query so they return a number of documents in a filtered set.
Autocomplete attributes
One of the features that is frequently used in search forms is autocomplete.
Searching big sets of text data by only a few characters is not an easy task. Elasticsearch provides us with the completion suggester, which works on a special field that is indexed in a way that enables very fast searching.
You need to decide which field or fields you want autocomplete to act on and what results should be suggested. Elasticsearch lets you define both the input and the output.
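The autocomplete implementation itself is not shown in the article, so the snippet below is only a rough sketch of how the Autocomplete method from the interface might use the completion suggester against the Suggest field defined on Post. The suggestion name "post-suggestions" is made up, and the exact fluent method names vary between NEST versions, so treat it as illustrative rather than authoritative:

public IEnumerable<string> Autocomplete(string query, int count)
{
    var result = client.Search<Post>(x => x
        .Suggest(s => s
            .Completion("post-suggestions", c => c // name of the suggestion
                .Field(f => f.Suggest) // the [Completion] field on Post
                .Text(query) // the user input to complete
                .Size(count))));

    return result.Suggest["post-suggestions"]
        .SelectMany(s => s.Options)
        .Select(o => o.Text);
}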
Summary
This article showed how to build full-text search functionality.
Installation and configuration of Elasticsearch are very easy. The default configuration choices are just right to start working with. Elasticsearch doesn't need a schema file and exposes a friendly JSON-based HTTP API for its configuration, index population, and searching. The engine is optimized to work with large amounts of data.
A high-level .NET client is available to communicate with Elasticsearch, so it fits nicely into a .NET project.
Elasticsearch is an advanced search engine with many attributes and its own query DSL.
If you want to learn ASP.Net and perfect yourself in .NET training, our CRB Tech Solutions would be of great help and support for you. Join us with our updated program in ASP.Net course.
Stay connected to CRB Tech reviews for more technical optimization and other resources.
Most Related Articles :
Top 5 Reasons That Make ASP.NET More Secure Over PHP
ASP.NET LIFE CYCLE IN DEPTH
|
https://medium.com/@poojagaikwad/building-a-search-page-with-elasticsearch-and-net-ii-c2916e570068
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Results 1 to 4 of 4
Thread: Object creation
- Join Date
- Feb 2010
- 6
- Thanks
- 0
- Thanked 0 Times in 0 Posts
Object creation
Hello,
I have one simple question. Why is it possible the following:
Code:
class Test1 {
    public void talk() {
        System.out.println("test1");
    }
}

class Test2 extends Test1 {
    @Override
    public void talk() {
        System.out.println("test2");
    }
}

public class MainTest {
    public static void main(String[] args) {
        Test1 t = new Test2();
        t.talk();
    }
}
- Join Date
- Jan 2016
- Location
- Malaysia
- 46
- Thanks
- 0
- Thanked 0 Times in 0 Posts
That's called inheritance.
Here class Test2 extends class Test1 and overrides the method talk(). So, when we run the above code, the method that gets called is talk() of class Test2.
- Join Date
- Sep 2014
- 233
- Thanks
- 0
- Thanked 44 Times in 42 Posts
It's actually called Polymorphism
Assume you have a base class of Shapes. Assume also that the class Shapes has a method called draw(). This class has descendants called circle, rectangle, etc. All of those descendants implements the method called draw.
Now, say you have a class called Canvas. This canvas will have children called Shapes. The canvas does not have to know how the draw method is implemented. All it has to do is to iterate on the children and invoke the function draw(). It will be up to the child to implement the draw function.
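A rough sketch of that Shapes/Canvas idea in Java (the class and method names here are just for illustration):
Code:
abstract class Shape {
    abstract void draw(); // each subclass decides how to draw itself
}

class Circle extends Shape {
    void draw() { System.out.println("drawing a circle"); }
}

class Rectangle extends Shape {
    void draw() { System.out.println("drawing a rectangle"); }
}

class Canvas {
    private java.util.List<Shape> shapes = new java.util.ArrayList<>();

    void add(Shape s) { shapes.add(s); }

    void drawAll() {
        // the canvas doesn't care which concrete Shape it holds
        for (Shape s : shapes) {
            s.draw();
        }
    }
}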
Object orientation implements what is called PIE (Polymorphism, Inheritance, Encapsulation), or in some cases A-PIE (with Abstraction added).
See
- Join Date
- Feb 2016
- Location
- Keene, NH
- 2,716
- Thanks
- 2
- Thanked 395 Times in 385 Posts
See, you're probably butting heads with explanations like the one @josephm provided, that uses all sorts of fancy words (that for someone like me who learned objects in "smalltalk" reeks of throwing new fangled terms at the simplest of concepts) but doesn't actually give you a practical example in plain English.
In general that was one of the things about Java that always pissed me off -- the concepts are simple, the tutorials, books, and language specification MAKES IT HARD.
Lemme use the example that was in the Turbo Pascal 5.5 manual. It is far simpler to understand.
Let's say you're making a game that is going to have four legged animals in it. You make a single parent class that contains all the information common to all four legged animals (properties) as well as basic behaviors (methods) -- like walking, resting, distant sound, angry sound. This gives you all the common code and a single uniform API for accessing an animal.
You then make a "feline" class that extends "animal" -- it sets the distant sound to a roar and the angry sound to a low growl. This pattern works well for most of the big cats without significant changes.
But then you want a housecat... changing those sounds to meow and hiss.
Even with the changes to each and every class along the way, ALL the different objects despite not sharing the same code, share the same API -- the same methods and properties you can look at to find out about the animal, to make it move, or to have it make noise.
This is why you'll often see base classes -- a "prototype" as it were -- that has all the methods defined, but don't actually do anything... yet. It's partly as a reminder that children of it should implement those methods, and partly as documentation.
A real world example comes from web browsers -- the DOM, Document Object Model. The core class in the DOM is a 'node'. ALL HTML elements are an extension of type Node, and all plaintext is assigned to an extension of node as well. They share common values like nodeType, firstChild, lastChild, nextSibling, previousSibling, parentNode -- but differ in how information inside them is tracked, such as element nodes having id, className, name, etc., which do not exist on text nodes.
That common ancestry allows you to "walk the DOM tree" consistently even though each of the child nodes are different classes.
So bottom line, the reason is twofold: first, to allow for prototyping of future code to a common API; second, to reduce the overall code by sharing like properties and methods when possible.
I would rather have questions that can't be answered, than answers that can't be questioned.
|
https://www.codingforums.com/java-and-jsp/390332-object-creation.html?s=8e47b12f1136d0a502edff662305c5bd
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
install-default-webapps-in-launcher.py crashed with signal 5 in g_settings_get_mapped()
Bug Description
The system does not emulate correctly and keeps freezing.
ProblemType: Crash
DistroRelease: Ubuntu 13.10
Package: unity-webapps-
ProcVersionSign
Uname: Linux 3.10.0-6-generic x86_64
NonfreeKernelMo
ApportVersion: 2.11-0ubuntu1
Architecture: amd64
Date: Mon Jul 29 21:29:14 2013
ExecutablePath: /usr/share/
InstallationDate: Installed on 2013-07-14 (15 days ago)
InstallationMedia: Ubuntu 13.10 "Saucy Salamander" - Alpha amd64 (20130529)
InterpreterPath: /usr/bin/python2.7
MarkForUpload: True
PackageArchitec
ProcCmdline: /usr/bin/python /usr/share/
Signal: 5
SourcePackage: webapps-
StacktraceTop:
g_settings_
ffi_call_unix64 () from /usr/lib/
ffi_call () from /usr/lib/
g_callable_
g_function_
Title: install-
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups:
Related branches
- PS Jenkins bot: Approve (continuous-integration) on 2013-08-05
- Alexandre Abreu: Pending requested 2013-08-05
- Diff: 43 lines (+8/-14)1 file modifiedscripts/install-default-webapps-in-launcher.py (+8/-14)
Status changed to 'Confirmed' because the bug affects multiple users.
This error pops up in default install, after booting and logging in for the first time.
In my case this crash happens every time logging is done.
This bug has been reported on the Ubuntu ISO testing tracker.
A list of all reports related to this bug can be found here:
http://
Line 1520 in that file looks like so:
g_error ("The mapping function given to g_settings_
"'%s' in schema '%s' returned FALSE when given a NULL value.",
key, g_settings_
which is a pretty good indicator of what's probably happening here.
Quoth the docs:
If the mapping function fails for all possible values, one additional attempt is made: the mapping function is called with a NULL value. If the mapping function still indicates failure at this point then the application will be aborted.
So indeed, the script contains this code:
first_mapping = [True]

def mapping_
    if not value:
        return (True, [])
    if first_mapping[0]:
        return (False, value)
    return (True, value)

def get_default_
    return settings.
Which looks like it's fine because of:
if not value:
return (True, [])
but for some reason, the python bindings are unable to deal with this. Running the script gives warnings like so:
** (process:21913): WARNING **: (pygi-argument.
before crashing with the error from the previous comment.
An easier way of getting the default value of a key (and the one most commonly used by other developers) is this pattern:
1) create a GSettings object
2) .delay()
3) .reset('key')
4) .get('key')
5) free the object without calling .apply()
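With PyGObject, that pattern looks roughly like the following sketch (the schema id and key name below are placeholders, not the ones used by this package):

from gi.repository import Gio

settings = Gio.Settings.new("org.example.schema")  # placeholder schema id
settings.delay()                    # queue changes instead of applying them
settings.reset("favorites")         # the key now reports its schema default
default = settings.get_value("favorites")
# drop the object without calling settings.apply(), so nothing is written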
Thanks for clearing this up Ryan, I'll submit a patch for this shortly.
Fix committed into lp:webapps-applications at revision 553, scheduled for release in webapps-
This is actually just Fix Committed as the fix is still in the -proposed repository. Launchpad will automatically set this to Fix Released when the package is available in the archive.
Oops, sorry. I just saw http://
This bug has been reported on the Ubuntu Package testing tracker.
A list of all reports related to this bug can be found here:
http://
Happens automatically when launching the saucy salamander image (both 32b and 64b).
Ubuntu 13.10 (Saucy Salamander) 64-bit PC (AMD64) desktop image
http://
I think it might be that my system is intel and not AMD but still 64-bit
This bug was fixed in the package webapps-
---------------
webapps-
[ Robert Bruce Park ]
* Get default gsettings value without crashing. (LP: #1206314). (LP:
#1206314)
[ Ubuntu daily release ]
* Automatic snapshot from revision 553
-- Ubuntu daily release <email address hidden> Wed, 14 Aug 2013 18:05:27 +0000
Intel Westmere board install of web apps failed at first login after install.
Jake, are you running raring? If not, just update to the latest saucy packages, this was fixed a while ago.
I just tried installing from the saucy 13.10 final beta, and hit this issue!
StacktraceTop:
g_settings_get_mapped (settings=0x2019030, key=0x1ed6f40 "favorites", mapping=0x7f2c52be7010, user_data=0x2018ed0) at /build/buildd/glib2.0-2.37.3/./gio/gsettings.c:1520
ffi_call_unix64 () at ../src/x86/unix64.S:76
ffi_call (cif=cif@entry=0x7fff66d7aa90, fn=fn@entry=0x7f2c5040eec0 <g_settings_get_mapped>, rvalue=rvalue@entry=0x7fff66d7aa70, avalue=avalue@entry=0x7fff66d7a990) at ../src/x86/ffi64.c:522
g_callable_info_invoke (info=info@entry=0x1f90a30, function=0x7f2c5040eec0 <g_settings_get_mapped>, in_args=in_args@entry=0x1ed7240, n_in_args=n_in_args@entry=4, out_args=out_args@entry=0x0, n_out_args=n_out_args@entry=0, return_value=return_value@entry=0x7fff66d7ac68, is_method=is_method@entry=1, throws=0, error=error@entry=0x7fff66d7ac18) at girepository/gicallableinfo.c:680
g_function_info_invoke (info=0x1f90a30, in_args=0x1ed7240, n_in_args=4, out_args=0x0, n_out_args=0, return_value=0x7fff66d7ac68, error=0x7fff66d7ac18) at girepository/gifunctioninfo.c:274
|
https://bugs.launchpad.net/ubuntu/+source/webapps-applications/+bug/1206314
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
The Coherence Extend C++ API contains a robust C++ object model. The object model is the foundation on which Coherence for C++ is built.
The following sections are included in this chapter:
Writing New Managed Classes
Diagnostics and Troubleshooting
Application Launcher - Sanka
The following sections contains general information for writing code which uses the object model.
This coherence namespace contains the following general purpose namespaces:
coherence::lang—the essential classes that comprise the object model
coherence::util—utility code, including collections
coherence::net—network and cache
coherence::stl—C++ Standard Template Library integration
coherence::io—serialization
Although each class is defined within its own header file, you can use namespace-wide header files to facilitate the inclusion of related classes. As a best practice include, at a minimum,
coherence/lang.ns in code that uses this object model.
The
coherence::lang::Object class is the root of the class hierarchy. This class provides the common interface for abstractly working with Coherence class instances. Object is an instantiable class that provides default implementations for the following functions.
equals
hashCode
clone (optional)
toStream (that is, writing an
Object to an
std::ostream)
See
coherence::lang::Object in the C++ API for more information.
In addition to its public interface, the Object class provides several features used internally. Of these features, the reference counter is perhaps the most important. It provides automatic memory management for the object. This automatic management eliminates many of the problems associated with object reference validity and object deletion responsibility. This management reduces the potential of programming errors which may lead to memory leaks or corruption. This results in a stable platform for building complex systems.
The reference count, and other object "life-cycle" information, operates in an efficient and thread-safe manner by using lock-free atomic compare-and-set operations. This allows objects to be safely shared between threads without the risk of corrupting the count or of the object being unexpectedly deleted due to the action of another thread.
To track the number of references to a specific object, there must be a level of cooperation between pointer assignments and a memory manager (in this case the object). Essentially the memory manager must be informed each time a pointer is set to reference a managed object. Using regular C++ pointers, the task of informing the memory manager would be left up to the programmer as part of each pointer assignment. In addition to being quite burdensome, the effects of forgetting to inform the memory manager would lead to memory leaks or corruption. For this reason the task of informing the memory manager is removed from the application developer, and placed on the object model, though the use of smart pointers. Smart pointers offer a syntax similar to normal C++ pointers, but they do the bookkeeping automatically.
The Coherence C++ object model contains a variety of smart pointer types, the most prominent being:
View—A smart pointer that can call only
const methods on the referenced object
Handle—A smart pointer that can call both
const and non-
const methods on the referenced object.
Holder—A special type of handle that enables you to reference an object as either
const or non-
const. The holder remembers how the object was initially assigned, and returns only a compatible form.
Other specialized smart pointers are described later in this section, but the View, Handle, and Holder smart pointers are used most commonly.
Note:In this documentation, the term handle (with a lowercase "h") refers to the various object model smart pointers. The term Handle (with an uppercase "H") refers to the specific Handle smart pointer.
By convention each managed class has these nested-types corresponding to these handles. For instance the managed
coherence::lang::String class defines
String::Handle,
String::View,
String::Holder.
Assignment of handles follows normal inheritance assignment rules. That is, a Handle may be assigned to a View, but a View may not be assigned to a Handle, just like a
const pointer cannot be assigned to a non-
const pointer.
When dereferencing a handle that references NULL, the system throws a
coherence::lang::NullPointerException instead of triggering a traditional segmentation fault.
For example, this code would throw a NullPointerException if hs == NULL:
String::Handle hs = getStringFromElsewhere();
cout << "length is " << hs->length() << endl;
All managed objects are heap allocated. The reference count—not the stack—determines when an object can be deleted. To protect against accidental stack-based allocations, all constructors are marked protected, and public factory methods are used to instantiate objects.
The factory method is named create and there is one create method for each constructor. The create method returns a Handle rather than a raw pointer. For example, the following code creates a new instance of a string:
String::Handle hs = String::create("hello world");
By comparison, these examples are incorrect and do not compile:
String str("hello world"); String* ps = new String("hello world");
All objects within the model, including strings, are managed and extend from
Object. Instead of using
char* or
std::string, the object model uses its own managed
coherence::lang::String class. The
String class supports ASCII and the full Unicode BML character set.
String objects can easily be constructed from
char* or
std::string strings, as shown in these examples:
Example 9-1 Examples of Constructing String Objects
const char* pcstr = "hello world";
std::string stdstr(pcstr);
String::Handle hs = String::create(pcstr);
String::Handle hs2 = String::create(stdstr);
The managed string is a copy of the supplied string and contains no references or pointers to the original. You can convert back, from a managed String to any other string type, by using
getCString() method. This returns a pointer to the original
const
char*. Strings can also be created using the standard C++
<< operator, when coupled with the
COH_TO_STRING macro.
To facilitate the use of quoted string literals, the
String::Handle and
String::View support auto-boxing from
const
char*, and
const std::string. Auto-boxing allows the code shown in the prior samples to be rewritten:
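The rewritten sample is missing from this copy of the text; with auto-boxing it would presumably reduce to direct assignment from the literal or std::string, along these lines:

String::Handle hs = "hello world";   // boxed from const char*
String::View   vs = stdstr;          // boxed from std::string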
Auto-boxing is also available for other types. See
coherence::lang::BoxHandle for details.
Handles are type safe, in the following example, the compiler does not allow you to assign an
Object::Handle to a
String::Handle, because not all Objects are Strings.
Object::Handle ho = getObjectFromSomewhere();
String::Handle hs = ho; // does not compile
However, the following example does compile, as all Strings are Objects.
Example 9-4 Type Safe Casting Examples
String::Handle hs = String::create("hello world");
Object::Handle ho = hs; // does compile
For situations in which you want to down-cast to a derived Object type, you must perform a dynamic cast using the C++ RTTI (run-time type information) check and ensure that the cast is valid. The Object model provides helper functions to ease the syntax.
cast<H>(o)—attempt to transform the supplied handle
o to type
H, throwing a
ClassCastException on failure
instanceof<H>(o)—test if a cast of
o to
H is allowable, returning
true for success, or
false for failure
These functions are similar to the standard C++
dynamic_cast<T>, but do not require access to the raw pointer.
The following example shows how to down cast a
Object::Handle to a
String::Handle:
Example 9-5 Down Casting Examples
Object::Handle ho = getObjectFromSomewhere();
String::Handle hs = cast<String::Handle>(ho);
The
cast<H> function throws a
coherence::lang::ClassCastException if the supplied object was not of the expected type. The
instanceof<H> function tests if an Object is of a particular type without risking an exception being thrown. Such checks are generally only needed for places where the actual type is in doubt.
Example 9-6 Object Type Checking with the instanceof<H> Function
Object::Handle ho = getObjectFromSomewhere();
if (instanceof<String::Handle>(ho))
    {
    String::Handle hs = cast<String::Handle>(ho);
    }
else if (instanceof<Integer32::Handle>(ho))
    {
    Integer32::Handle hn = cast<Integer32::Handle>(ho);
    }
else
    {
    ...
    }
Managed arrays are provided by using the
coherence::lang::Array<T> template class. In addition to being managed and adding safe and automatic memory management, this class includes the overall length of the array, and bounds checked indexing.
You can index an array by using its Handle's subscript operator, as shown in this example:
Example 9-7 Indexing an Array
Array<int32_t>::Handle harr = Array<int32_t>::create(10);
int32_t nTotal = 0;
for (size32_t i = 0, c = harr->length; i < c; ++i)
    {
    nTotal += harr[i];
    }
The object model supports arrays of C++ primitives and managed Objects. Arrays of derived Object types are not supported, only arrays of Object; casting must be employed to retrieve the derived handle type. Arrays of Objects are technically Array<MemberHolder<Object> >, which is typedef'd to ObjectArray for easier readability.
The
coherence::util* namespace includes several collection classes and interfaces that may be useful in your application. These include:
coherence::util::Collection —interface
coherence::util::List—interface
coherence::util::Set—interface
coherence::util::Queue—interface
coherence::util::Map—interface
coherence::util::Arrays—implementation
coherence::util::LinkedList—implementation
coherence::util::HashSet—implementation
coherence::util::DualQueue—implementation
coherence::util::SafeHashMap—implementation
coherence::util::WeakHashMap—implementation
coherence::util::IdentityHashMap—implementation
These classes also appear as part of the Coherence Extend API.
Similar to
ObjectArray,
Collections contain
Object::Holders, allowing them to store any managed object instance type.
In the object model, exceptions are also managed objects. Managed Exceptions allow caught exceptions to be held as a local variable or data member without the risk of object slicing.
All Coherence exceptions are defined by using a
throwable_spec and derive from the
coherence::lang::Exception class, which derives from
Object. Managed exceptions are not explicitly thrown by using the standard C++
throw statement, but rather by using a
COH_THROW macro. This macro sets stack information and then calls the exception's
raise method, which ultimately calls throw. The resulting thrown object may be caught an the corresponding exceptions
View type, or an inherited
View type. Additionally these managed exceptions may be caught as standard
const
std::exception classes. The following example shows a try/catch block with managed exceptions:
Example 9-9 A Try/Catch Block with Managed Exceptions
try
    {
    Object::Handle h = NULL;
    h->hashCode(); // trigger an exception
    }
catch (NullPointerException::View e)
    {
    cerr << "caught" << e << endl;
    COH_THROW(e); // rethrow
    }
Note: This exception could also have been caught as Exception::View or const std::exception&.
In C++ the information of how an object was declared (such as const) is not available from a pointer or reference to an object. For instance a pointer of type
const Foo*, only indicates that the user of that pointer cannot change the objects state. It does not indicate if the referenced object was actually declared const, and is guaranteed not to change. The object model adds a run-time immutability feature to allow the identification of objects which can no longer change state.
The
Object class maintains two reference counters: one for Handles and one for Views. If an object is referenced only from Views, then it is by definition immutable, as Views cannot change the state, and Handles cannot be obtained from Views. The
isImmutable() method (included in the
Object class) can test for this condition. The method is virtual, allowing subclasses to alter the definition of immutable. For example, String contains no non-const methods, and therefore has an isImmutable() method that always returns true.
Note that when immutable, an object cannot revert to being mutable. You cannot cast away
const-ness to turn a
View into a
Handle as this would violate the proved immutability.
Immutability is important with caching. The Coherence
NearCache and
ContinuouQueryCache can take advantage of the immutability to determine if a direct reference of an object can be stored in the cache or if a copy must be created. Additionally, knowing that an object cannot change allows safe multi-threaded interaction without synchronization.
Frequently, existing classes must be integrated into the object model. A typical example would be to store a data-object into a Coherence cache, which only supports storage of managed objects. As it would not be reasonable to require that pre-existing classes be modified to extend from
coherence::lang::Object, the object model provides an adapter which automatically converts a non-managed plain old C++ class instance into a managed class instance at run time.
This is accomplished by using the
coherence::lang::Managed<T> template class. This template class extends from Object and from the supplied template parameter type
T, effectively producing a new class which is both an Object and a
T. The new class can be initialized from a
T, and converted back to a
T. The result is an easy to use, yet very powerful bridge between managed and non-managed code.
See the API doc for
coherence::lang::Managed for details and examples.
The following section provides information necessary to write new managed classes, that is, classes which extend from
Object. The creation of new managed classes is required when you are creating new
EventListeners,
EntryProcessors, or
Filter types. They are not required when you are working with existing C++ data objects or making use of the Coherence C++ API. See the previous section for details on integration non-managed classes into the object model.
Specification-based definitions (specs) enable you to quickly define managed classes in C++.
Specification-based definitions are helpful when you are writing your own implementation of managed objects.
There are various forms of specs used to create different class types:
class_spec—standard instantiatable class definitions
cloneable_spec—cloneable class definitions
abstract_spec—non-instantiatable class definitions, with zero or more pure virtual methods
interface_spec—for defining interfaces (pure virtual, multiply inheritable classes)
throwable_spec—managed classes capable of being thrown as exceptions
Specs automatically define these features on the class being spec'd:
Handles, Views, Holders
static
create() methods which delegate to protected constructors
virtual
clone() method delegating to the copy constructor
virtual
sizeOf() method based on
::sizeof()
super
typedef for referencing the class from which the defined class derives
inheritance from
coherence::lang::Object, when no parent class is specified by using
extends<>
To define a class using specs, the class publicly inherits from the specs above. Each of these specs are parametrized templates. The parameters are as follows:
The name of the class being defined.
The class to publicly inherit from, specified by using an
extends<> statement, defaults to
extends<Object>
This element is not supplied in interface_spec
Except for
extends<Object>, the parent class is not derived from virtually
A list of interfaces implemented by the class, specified by using an
implements<> statement
All interfaces are derived from using public virtual inheritance
Note that the
extends<> parameter is note used in defining interfaces.
Example 9-10 illustrates using
interface_spec to define a
Comparable interface:
Example 9-10 An Interface Defined by interface_spec
class Comparable
    : public interface_spec<Comparable>
    {
    public:
        virtual int32_t compareTo(Object::View v) const = 0;
    };
Example 9-11 illustrates using
interface_spec to define a derived interface
Number:
Example 9-11 A Derived Interface Defined by interface_spec
class Number
    : public interface_spec<Number,
        implements<Comparable> >
    {
    public:
        virtual int32_t getValue() const = 0;
    };
Next a
cloneable_spec is used to produce an implementation. This is illustrated in in Example 9-12.
Note: To support the auto-generated create methods, instantiatable classes must declare the coherence::lang::factory<> template as a friend. By convention this is the first statement within the class body.
Example 9-12 An Implementation Defined by cloneable_spec
class Integer
    : public cloneable_spec<Integer,
        extends<Object>,
        implements<Number> >
    {
    friend class factory<Integer>;

    protected:
        Integer(int32_t n)
            : super(), m_n(n)
            {
            }

        Integer(const Integer& that)
            : super(that), m_n(that.m_n)
            {
            }

    public:
        virtual int32_t getValue() const
            {
            return m_n;
            }

        virtual int32_t compareTo(Object::View v) const
            {
            return getValue() - cast<Integer::View>(v)->getValue();
            }

        virtual void toStream(std::ostream& out) const
            {
            out << getValue();
            }

    private:
        int32_t m_n;
    };
The class definition in Example 9-12 is the equivalent the non-spec based definitions in Example 9-13.
Example 9-13 Defining a Class Without the use of specs
class Integer
    : public virtual Object, public virtual Number
    {
    public:
        typedef TypedHandle<const Integer> View;   // was auto-generated
        typedef TypedHandle<Integer>       Handle; // was auto-generated
        typedef TypedHolder<Integer>       Holder; // was auto-generated
        typedef Object super;                      // was auto-generated

        // was auto-generated
        static Integer::Handle create(const int32_t& n)
            {
            return new Integer(n);
            }

    protected:
        Integer(int32_t n)
            : super(), m_n(n)
            {
            }

        Integer(const Integer& that)
            : super(that), m_n(that.m_n)
            {
            }

    public:
        virtual int32_t getValue() const
            {
            return m_n;
            }

        virtual int32_t compareTo(Object::View v) const
            {
            return getValue() - cast<Integer::View>(v)->getValue();
            }

        virtual void toStream(std::ostream& out) const
            {
            out << getValue();
            }

        // was auto-generated
        virtual Object::Handle clone() const
            {
            return new Integer(*this);
            }

        // was auto-generated
        virtual size32_t sizeOf() const
            {
            return ::sizeof(Integer);
            }

    private:
        int32_t m_n;
    };
Example 9-14 illustrates using the spec'd class:
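The listing for Example 9-14 is not reproduced in this copy; usage would presumably look something like the following sketch, relying only on the create() factory and the Handle/View types generated by the spec:

Integer::Handle hNum = Integer::create(42); // factory method generated by the spec
Integer::View   vNum = hNum;                // a Handle may be assigned to a View
std::cout << vNum->getValue() << std::endl; // prints 42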
Equality, Hashing, Cloning, Immutability, and Serialization all identify the state of an object and generally have similar implementation concerns. Simply put, all data members referenced in one of these methods, are likely referenced in all of the methods. Conversely any data members which are not referenced by one, should likely not be referenced by any of these methods.
Consider the simple case of a
HashSet::Entry, which contains the well known key and value data members. These are to be considered in the equals method and would likely be tested for equality by using a call to their own equals method rather than through reference equality. If
Entry also contains, as part of the implementation of the
HashSet, a handle to the next
Entry within the
HashSet's bucket and perhaps also contains a handle back to the
HashSet itself, should these be considered in equals as well? Likely not, it would seem reasonable that comparing two entries consisting of equal keys and values from two maps should be considered equal. Following this line of thought the
hashCode method on
Entry would completely ignore data members except for key and value, and
hashCode would be computed using the results of its key and value
hashCode, rather then using their identity
hashCode. that is, a deep equality check in equals implies a deep hash in
hashCode.
For clone, only the key and value (not all the data members) require cloning. To clone the parent
Map as part of clone, the
Entry would make no sense and a similar argument can be made for cloning the handle to the next
Entry. This line of thinking can be extended to the
isImmutable method, and to serialization as well. While it is certainly not a hard and fast rule, it is worth considering this approach when implementing any of these methods.
The object model includes managed threads, which allows for easy creation of platform independent, multi-threaded, applications. The threading abstraction includes support for creating, interrupting, and joining threads. Thread local storage is available from the
coherence::lang::ThreadLocalReference class. Thread dumps are also available for diagnostic and troubleshooting purposes. The managed threads are ultimately wrappers around the system's native thread type, such as POSIX or Windows Threads. This threading abstraction is used internally by Coherence, but is available for the application, if necessary.
Example 9-15 illustrates how to create a
Runnable instance and spawn a thread:
Example 9-15 Creating a Runnable Instance and Spawning a Thread
class HelloRunner
    : public class_spec<HelloRunner,
        extends<Object>,
        implements<Runnable> >
    {
    friend class factory<HelloRunner>;

    protected:
        HelloRunner(int cReps)
            : super(), m_cReps(cReps)
            {
            }

    public:
        virtual void run()
            {
            for (int i = 0; i < m_cReps; ++i)
                {
                Thread::sleep(1000);
                std::cout << "hello world" << std::endl;
                }
            }

    protected:
        int m_cReps;
    };
...
Thread::Handle hThread = Thread::create(HelloRunner::create(10));
hThread->start();
hThread->join();
Refer to
coherence::lang::Thread and
coherence::lang::Runnable for more information.
The primary functional limitation of a reference counting scheme is automatic cleanup of cyclical object graphs. Consider the simple bi-directional relationship illustrated in Figure 9-1.
In this picture, both A and B have a reference count of one, which keeps them active. What they do not realize is that they are the only things keeping each other active, and that no external references to them exist. Reference counting alone is unable to handle these self sustaining graphs and memory would be leaked.
The provided mechanism for dealing with graphs is weak references. A weak reference is one which references an object, but does not prevent it from being deleted. As illustrated in Figure 9-2, the A->B->A issue could be resolved by changing it to the following.
Where A now has a weak reference to B. If B were to reach a point where it was only referenced weakly, it would clear all weak references to itself and then be deleted. In this simple example that would also trigger the deletion of A, as B had held the only reference to A.
Weak references allow for construction of more complicated structures than this, but it becomes necessary to adopt a convention for which references are weak and which are strong. Consider the tree illustrated in Figure 9-3. The tree consists of nodes A, B, and C, and two external references to the tree, X and Y.
In this tree the parent (A) uses strong references to its children (B, C), and the children use weak references to their parent. With the picture as it is, reference Y could navigate the entire tree, starting at child B, moving up to A, and then down to C. But what if reference X were to be reset to NULL? This would leave A only weakly referenced, so it would clear all weak references to itself and be deleted. In deleting itself, there would no longer be any references to C, which would also be deleted. At this point reference Y, without having taken any action, would now refer to the situation illustrated in Figure 9-4.
This is not necessarily a problem, just a possibility which must be considered when using weak references. To work around this issue, the holder of Y would also likely maintain a reference to A to ensure the tree did not dissolve away unexpectedly.
See the Javadoc for
coherence::lang::WeakReference,
WeakHandle, and
WeakView for usage details.
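A brief sketch of that parent/child convention follows; the Node class and its members are hypothetical, and WeakHandle/MemberHandle are assumed to be initialized with the guardian returned by self(), as with the other thread-safe handles:

class Node
    : public class_spec<Node>
    {
    friend class factory<Node>;

    protected:
        Node()
            : super(),
              m_whParent(self()), // weak: does not keep the parent alive
              m_hChild(self())    // strong: keeps the child alive
            {
            }

    public:
        void setParent(Node::Handle hParent)
            {
            m_whParent = hParent;
            }

        void setChild(Node::Handle hChild)
            {
            m_hChild = hChild;
            }

        Node::Handle getParent()
            {
            // NULL once the parent has been cleared and deleted
            return m_whParent;
            }

    private:
        WeakHandle<Node>   m_whParent;
        MemberHandle<Node> m_hChild;
    };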
As is typical in C++, referencing an object under construction can be dangerous. Specifically references to
this are to be avoided within a constructor, as the object initialization has not yet completed. For managed objects, creating a handle to
this from the constructor usually causes the object to be destructed before it ever finishes being created. Instead, the object model includes support for virtual constructors. The virtual constructor
onInit is defined by
Object and can be overridden on derived classes. This method is called automatically by the object model just after construction completes, and just before the new object is returned from its static create method. Within the
onInit method, it is safe to reference
this to call virtual functions and to hand out references to the new object to other class instances. Any derived implementation of
onInit must include a call to
super::onInit() to allow the parent class to also initialize itself.
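A hypothetical sketch: a class that must publish a reference to itself to some registry defers that work from the constructor to onInit (the Registry type is an assumption, not part of the API):

class EventSource
    : public class_spec<EventSource>
    {
    friend class factory<EventSource>;

    protected:
        EventSource()
            : super()
            {
            // Do NOT hand out 'this' here; the object is not fully created yet.
            }

        virtual void onInit()
            {
            super::onInit(); // required: allow the parent class to initialize itself

            // Safe now: the object is fully constructed, so a handle to it
            // may be created and handed out.
            EventSource::Handle hSelf = this;
            Registry::getInstance()->subscribe(hSelf);
            }
    };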
In addition to the Handle and View smart pointers (discussed previously), the object model contains several other specialized variants that can be used. For the most part use of these specialized smart pointers is limited to writing new managed classes, and they do not appear in normal application code.
Although the base
Object class is thread-safe, this cannot provide automatic thread safety for the state of derived classes. As is typical it is up to each individual derived class implementation to provide for higher level thread-safety. The object model provides some facilities to aid in writing thread-safe code.
Every
Object in the object model can be a point of synchronization and notification. To synchronize an object and acquire its internal monitor, use a
COH_SYNCHRONIZED macro code block, as shown in Example 9-16:
Example 9-16 A Sample COH_SYNCHRONIZED Macro Code Block
SomeClass::Handle h = getObjectFromSomewhere();

COH_SYNCHRONIZED (h)
    {
    // monitor of Object referenced by h has been acquired
    if (h->checkSomeState())
        {
        h->actOnThatState();
        }
    }
// monitor is automatically released
The
COH_SYNCHRONIZED block performs the monitor acquisition and release. You can safely exit the block with
return,
throw,
COH_THROW,
break,
continue, and
goto statements.
The
Object class includes
wait(),
wait(timed),
notify(), and
notifyAll() methods for notification purposes. To call these methods, the caller must have acquired the
Object's monitor. Refer to
coherence::lang::Object for details.
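For illustration, a hand-rolled producer/consumer exchange over a shared monitor might look like the following; the m_hMonitor and m_hItem members and the process() helper are assumptions, not part of the API:

// consumer thread: wait until another thread publishes an item
COH_SYNCHRONIZED (m_hMonitor)
    {
    while (NULL == m_hItem)          // always re-check the condition after waking
        {
        m_hMonitor->wait();          // releases the monitor while waiting
        }
    process(m_hItem);
    }

// producer thread: publish the item and wake any waiters
COH_SYNCHRONIZED (m_hMonitor)
    {
    m_hItem = hNewItem;
    m_hMonitor->notifyAll();
    }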
Read-write locks are also provided, see
coherence::util::ThreadGate for details.
The Handle, View, and Holder nested types defined on managed classes are intentionally not thread-safe. That is, it is not safe to have multiple threads share a single handle. There is an important distinction here: it is the thread-safety of the handle that is being discussed, not that of the object referenced by the handle. It is safe to have multiple distinct handles that reference the same object from different threads without additional synchronization.
This lack of thread-safety for these handle types offers a significant performance optimization as the vast majority of handles are stack allocated. So long as references to these stack allocated handles are not shared across threads, there is no thread-safety issue to be concerned with.
Thread-safe handles are needed any time a single handle may be referenced by multiple threads. Typical cases include:
Global handles - using the standard handle types as global or static variable is not safe.
Non-managed multi-threaded application code - Use of standard handles within data structures which may be shared across threads is unsafe.
Managed classes with handles as data members - It should be assumed that any instance of a managed class may be shared by multiple threads, and thus using standard handles as data members is unsafe. Note that while it may not be strictly true that all managed classes may be shared across threads, if an instance is passed to code outside of your explicit control (for instance put into a cache), there is no guarantee that the object is not visible to other threads.
The use of standard handles should be replaced with thread-safe handles in such cases. The object model includes the following set of thread-safe handles.
coherence::lang::MemberHandle<T>—thread-safe version of T::Handle
coherence::lang::MemberView<T>—thread-safe version of T::View
coherence::lang::MemberHolder<T>—thread-safe version of T::Holder
coherence::lang::FinalHandle<T>—thread-safe final version of T::Handle
coherence::lang::FinalView<T>—thread-safe final version of T::View
coherence::lang::FinalHolder<T>—thread-safe final version of T::Holder
coherence::lang::WeakHandle<T>—thread-safe weak handle to T
coherence::lang::WeakView<T>—thread-safe weak view to T
coherence::lang::WeakHolder<T>—thread-safe weak T::Holder
These handle types may be read and written from multiple threads without the need for additional synchronization. They are primarily intended for use as the data members of other managed classes; each instance is provided with a reference to a guardian managed Object. The guardian's internal thread-safe atomic state is used to provide thread-safety to the handle. When using these handle types it is recommended that they be read into a normal stack-based handle if they are continually accessed within a code block. This assignment to a standard stack-based handle is thread-safe and, once completed, allows for essentially free dereferencing of the stack-based handle. Note that when initializing thread-safe handles a reference to a guardian Object must be supplied as the first parameter; this reference can be obtained by calling self() on the enclosing object.
Example 9-17 illustrates a trivial example:
Example 9-17 Thread-safe Handle
class Employee
    : public class_spec<Employee>
    {
    friend class factory<Employee>;

    protected:
        Employee(String::View vsName, int32_t nId)
            : super(),
              m_vsName(self(), vsName),
              m_nId(nId)
            {
            }

    public:
        String::View getName() const
            {
            return m_vsName; // read is automatically thread-safe
            }

        void setName(String::View vsName)
            {
            m_vsName = vsName; // write is automatically thread-safe
            }

        int32_t getId() const
            {
            return m_nId;
            }

    private:
        MemberView<String> m_vsName;
        const int32_t      m_nId;
    };
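A hypothetical caller following the recommendation above would read the thread-safe member into a stack-based handle once and then dereference that handle freely:

void printBadge(Employee::View vEmp)
    {
    // one thread-safe read of the MemberView into a stack-based handle;
    // subsequent dereferences of vsName are essentially free
    String::View vsName = vEmp->getName();

    std::cout << "Badge for: " << vsName
              << ", id " << vEmp->getId() << std::endl;
    }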
The same basic technique can be applied to non-managed classes as well. Since non-managed classes do not extend
coherence::lang::Object, they cannot be used as the guardian of thread-safe handles. It is possible to use another
Object as the guardian. However, it is crucial to ensure that the guardian
Object outlives the guarded thread-safe handle. When using another object as the guardian, obtain a random immortal guardian from
coherence::lang::System through a call to
System::common() as illustrated in Example 9-18:
Example 9-18 Thread-safe Handle as a Non-Managed Class
class Employee
    {
    public:
        Employee(String::View vsName, int32_t nId)
            : m_vsName(System::common(), vsName),
              m_nId(nId)
            {
            }

        String::View getName() const
            {
            return m_vsName;
            }

        void setName(String::View vsName)
            {
            m_vsName = vsName;
            }

        int32_t getId() const
            {
            return m_nId;
            }

    private:
        MemberView<String> m_vsName;
        const int32_t      m_nId;
    };
When writing managed classes it is preferable to obtain a guardian through a call to
self() rather than to
System::common().
Note: In the rare case that one of these handles is declared through the mutable keyword, it must be informed of this fact by setting fMutable to true during construction.
Thread-safe handles can also be used in non-class shared data as well. For example, global handles:
MemberView<NamedCache> MY_CACHE(System::common());

int main(int argc, char** argv)
    {
    MY_CACHE = CacheFactory::getCache(argv[0]);
    }
The object model includes escape analysis based optimizations. The escape analysis is used to automatically identify when a managed object is only visible to a single thread and in such cases optimize out unnecessary synchronizations. The following types of operations are optimized for non-escaped objects.
reference count updates
COH_SYNCHRONIZED acquisition and release
reading/writing of thread-safe handles
reading of thread-safe handles from immutables
Escape analysis is automatic and is completely safe so long as you follow the rules of using the object model. Most specifically, it is not safe to pass a managed object between threads without using a provided thread-safe handle. Passing it by an external mechanism does not allow escape analysis to identify the "escape", which could cause memory corruption or other run-time errors.
Each managed class type includes nested definitions for a Handle, View, and Holder. These handles are used extensively throughout the Coherence API and in application code. They are intended for use as stack-based references to managed objects. They are not intended to be made visible to multiple threads. That is, a single handle should not be shared between two or more threads, though it is safe to have a managed Object referenced from multiple threads, so long as it is by distinct Handles, or a thread-safe MemberHandle/View/Holder.
It is important to remember that global handles to managed Objects should be considered to be "shared", and therefore must be thread-safe, as demonstrated previously. The failure to use thread-safe handles for globals causes escaped objects to not be properly identified, leading to memory corruption.
In 3.4 these non thread-safe handles could be shared across threads so long as external synchronization was employed, or if the handles were read-only. In 3.5 and later this is no longer true, even when used in a read-only mode or enclosed within external synchronization these handles are not thread-safe. This is due to a fundamental change in implementation which drastically reduces the cost of assigning one handle to another, which is an operation which occurs constantly. Any code which was using handles in this fashion should be updated to make use of thread-safe handles. See "Thread Safe Handles" for more information.
Coherence escape analysis, among other things, leverages the computed mutability of an object to determine if state changes on data members are still possible. Namely, when an object is only referenced from views, it is assumed that its data members do not undergo further updates. The C++ language provides some mechanisms to bypass this const-only access and allow mutation from const methods. For instance, the use of the mutable keyword in a data member declaration, or the casting away of constness. The arguably cleaner and supported approach for the object model is the mutable keyword. For the Coherence object model, when a thread-safe data member handle is declared as mutable this information must be communicated to the data member. All thread-safe data members support an optional third parameter fMutable which should be set to true if the data member has been declared with the mutable keyword. This informs the escape analysis routine to not consider the data member as "const" when the enclosing object is only referenced using Views. Casting away of the constness of managed object is not supported, and can lead to run time errors if the object model believes that the object can no longer undergo state changes.
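A hypothetical sketch of this combination: a lazily computed label cached in a data member that is declared mutable and constructed with fMutable set to true (the Product class and the COH_TO_STRING/String::null_string usage are illustrative assumptions):

class Product
    : public class_spec<Product>
    {
    friend class factory<Product>;

    protected:
        Product(String::View vsName)
            : super(),
              f_vsName(self(), vsName),
              m_vsLabel(self(), String::null_string, /*fMutable*/ true)
            {
            }

    public:
        String::View getLabel() const
            {
            String::View vsLabel = m_vsLabel;
            if (NULL == vsLabel)
                {
                // write from a const method: the data member was declared
                // mutable, and fMutable=true told the object model about it
                vsLabel   = COH_TO_STRING("product: " << f_vsName);
                m_vsLabel = vsLabel;
                }
            return vsLabel;
            }

    private:
        FinalView<String>          f_vsName;
        mutable MemberView<String> m_vsLabel;
    };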
Coherence for C++ includes a thread-local allocator to improve performance of dynamic allocations which are heavily used within the API. By default, each thread grows a pool to contain up to 64KB of reusable memory blocks to satisfy the majority of dynamic object allocations. The pool is configurable using the following system properties:
tangosol.coherence.slot.size controls the maximum size of an object which is considered for allocation from the pool, the default is 128 bytes. Larger objects call through to the system's
malloc routine to obtain the required memory.
tangosol.coherence.slot.count controls the number of slots available to each thread for handling allocations, the default is 512 slots. If there are no available slots, allocations fall back on
malloc.
tangosol.coherence.slot.refill controls the rate at which slot misses trigger refilling the pool. The default of 10000 causes one in every 10000 pool misses to force an allocation which is eligible for refilling the pool.
The pool allocator can be disabled by setting the size or count to
0.
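For example, assuming the properties are passed like any other system property (for instance through the launcher's -D option shown later in this chapter), the pool could be disabled or retuned from the command line; the myapp library and my::app::Main class are hypothetical:

> sanka -Dtangosol.coherence.slot.count=0 -l myapp my::app::Main
> sanka -Dtangosol.coherence.slot.size=256 -l myapp my::app::Main

The first invocation disables the pool entirely; the second considers objects up to 256 bytes for pooled allocation.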
This section provides information which can aid in diagnosing issues in applications which make use of the object model.
Thread dumps are available for diagnostic and troubleshooting purposes. These thread dumps also include the stack trace. You can generate a thread dump by performing a
CTRL+BREAK (Windows) or a
CTRL+BACKSLASH (UNIX). Example 9-19 illustrates a sample thread dump:
Example 9-19 Sample Thread Dump
Thread dump Oracle Coherence for C++ v3.4b397 (Pre-release) (Apple Mac OS X x86 debug) pid=0xf853; spanning 190ms

"main" tid=0x101790 runnable: <native>
    at coherence::lang::Object::wait(long long) const
    at coherence::lang::Thread::dumpStacks(std::ostream&, long long)
    at main
    at start

"coherence::util::logging::Logger" tid=0x127eb0 runnable: Daemon{State=DAEMON_RUNNING, Notification=false, StartTimeStamp=1216390067197, WaitTime=0, ThreadName=coherence::util::logging::Logger}
    at coherence::lang::Object::wait(long long) const
    at coherence::component::util::Daemon::onWait()
    at coherence::component::util::Daemon::run()
    at coherence::lang::Thread::run()
While the managed object model's reference counting helps prevent memory leaks, they are still possible. The most common way in which they are triggered is through cyclical object graphs. The object model includes heap analysis support to help identify if leaks are occurring, by tracking the number of live objects in the system. Comparing this value over time provides a simple means of detecting if the object count is consistently increasing, and thereby likely leaking. After a probable leak has been detected, the heap analyzer can help track it down as well, by providing statistics on what types of objects appear to have leaked.
Coherence provides a pluggable
coherence::lang::HeapAnalyzer interface. The
HeapAnalyzer implementation can be specified by using the
tangosol.coherence.heap.analyzer system property. The property can be set to the following values:
none—No heap analysis is performed. This is the default.
object—The
coherence::lang::ObjectCountHeapAnalyzer is used. It provides simple heap analysis based solely on the count of the number of live objects in the system.
class—The
coherence::lang::ClassBasedHeapAnalyzer is used. It provides heap analysis at the class level; that is, it tracks the number of live instances of each class, and the associated byte-level usage.
alloc —Specialization of
coherence::lang::ClassBasedHeapAnalyzer which additionally tracks the allocation counts at the class level.
custom—Lets you define your own analysis routines. You specify the name of a class registered with the
SystemClassLoader.
Heap information is returned when you perform a
CTRL+BREAK (Windows) or
CTRL+BACKSLASH (UNIX).
Example 9-20 illustrates heap analysis information returned by the class-based analyzer. It returns the heap analysis delta resulting from the insertion of a new entry into a
Map.
For all that the object model does to prevent memory corruption, it is typically used alongside non-managed code which could cause corruption. Therefore, the object model includes memory corruption detection support. When enabled, the object model's memory allocator pads the beginning and end of each object allocation by a configurable number of pad bytes. This padding is encoded with a pattern which can later be validated to ensure that the pad has not been touched. If memory corruption occurs, and affects a pad, subsequent validations detect the corruption. Validation is performed when the object is destroyed.
The debug version of the Coherence C++ API has padding enabled by default, using a pad size of 2*(word size), on each side of an object allocation. In a 32-bit build, this adds 16 bytes per object. Increasing the size of the padding increases the chances of corruption affecting a pad, and thus the chance of detecting corruption.
The size of the pad can be configured by using the tangosol.coherence.heap.padding system property, which can be set to the number of bytes for the pre/post pad. Setting this system property to a nonzero value enables the feature, and is available even in release builds.
Example 9-21 illustrates the results from an instance of memory corruption detection:
Coherence uses an application launcher for invoking executable classes embedded within a shared library. The launcher allows for a shared library to contain utility or test executables without shipping individual standalone executable binaries.
The launcher named
sanka works similar to
java, in that it is provided with one or more shared libraries to load, and a fully qualified class name to execute.
Usage: sanka [-options] <native class> [args...]

available options include:
    -l <native library list>   dynamic libraries to load, separated by : or ;
    -D<property>=<value>       set a system property
    -version                   print the Coherence version
    -?                         print this help message
    <native class>             the fully qualified class. For example, coherence::net::CacheFactory
The specified libraries must either be accessible from the operating system library path (
PATH,
LD_LIBRARY_PATH,
DYLD_LIBRARY_PATH), or they may be specified with an absolute or relative path. Library names may also leave off any operating-system-specific prefix or suffix. For instance, the UNIX
libfoo.so or Windows
foo.dll can be specified simply as
foo. The Coherence shared library which the application was linked against must be accessible from the system's library path as well.
Several utility executables classes are included in the Coherence shared library:
coherence::net::CacheFactory runs the Coherence C++ console
coherence::lang::SystemClassLoader prints out the registered managed classes
coherence::io::pof::SystemPofContext prints out the registered POF types
The latter two executables can optionally be supplied with shared libraries to inspect, in which case they output the registrations which exist in the supplied library rather than all registrations.
Note: The console, which was formerly shipped as an example, is now shipped as a built-in executable class.
Applications can of course still be made executable in the traditional C++ manner, using a global main function. If desired, you can make your own classes executable using Sanka as well. The following is a simple example of an executable class:
#include "coherence/lang.ns" COH_OPEN_NAMESPACE2(my,test) using namespace coherence::lang; class Echo : public class_spec<Echo> { friend class factory<Echo>; public: static void main(ObjectArray::View vasArg) { for (size32_t i = 0, c = vasArg->length; i < c; ++i) { std::cout << vasArg[i] << std::endl; } } }; COH_REGISTER_EXECUTABLE_CLASS(Echo); // must appear in .cpp COH_CLOSE_NAMESPACE2
As you can see the specified class must have been registered as an
ExecutableClass and have a
main method matching the following signature:
static void main(ObjectArray::View)
The supplied
ObjectArray parameter is an array of
String::View objects corresponding to the command-line arguments which followed the executable class name.
When linked into a shared library, for instance
libecho.so or
echo.dll, the
Echo class can be run as follows:
> sanka -l echo my::test::Echo Hello World
Hello
World
The Coherence examples directory includes a helper script
buildlib for building simple shared libraries.
|
https://docs.oracle.com/cd/E24290_01/coh.371/e22839/cpp_objectmod.htm
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Class to provide aligned memory allocate functionality. More...
#include <OgreAlignedAllocator.h>
Class to provide aligned memory allocate functionality.
Definition at line 63 of file OgreAlignedAllocator.h.
Allocate memory with given alignment.
Referenced by Ogre::StdAlignedAllocPolicy< Alignment >::allocateBytes().
Allocate memory with default platform dependent alignment.
Deallocate memory that allocated by this class.
Referenced by Ogre::StdAlignedAllocPolicy< Alignment >::deallocateBytes().
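A minimal usage sketch, assuming the allocate overloads and deallocate shown above take a byte count, an optional alignment, and the pointer to release respectively:

#include <OgreAlignedAllocator.h>

// Allocate a 16-byte aligned buffer and a default-aligned buffer, then release both.
void* p = Ogre::AlignedMemory::allocate(1024, 16); // explicit alignment
void* q = Ogre::AlignedMemory::allocate(1024);     // default platform-dependent alignment

// ... use the buffers ...

Ogre::AlignedMemory::deallocate(p);
Ogre::AlignedMemory::deallocate(q);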
|
https://www.ogre3d.org/docs/api/1.7/class_ogre_1_1_aligned_memory.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
UltraLite.NET applications can be deployed to Windows Mobile and Windows. If you are deploying to Windows Mobile, UltraLite.NET requires the .NET Compact Framework. If deploying to Windows, it requires the .NET Framework. UltraLite.NET also supports ActiveSync synchronization.
The .NET Compact Framework is the Microsoft .NET runtime component for Windows Mobile. It supports several programming languages. You can use either Visual Basic.NET or C# to build applications using UltraLite.NET.
UltraLite.NET provides the following namespace:
iAnywhere.Data.UltraLite This namespace provides an ADO.NET interface to UltraLite. It has the advantage of being built on an industry-standard model and providing a migration path to the SQL Anywhere ADO.NET interface, which is very similar.
iAnywhere.Data.UltraLite namespace (.NET 2.0)
|
http://dcx.sap.com/1100/en/uldotnet_en11/uljavadotnet-intro-s-3752373.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Add, change or delete an environment variable
#include <stdlib.h> int putenv( char *env_name );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The putenv() function uses env_name, in the form name=value, to set the environment variable name to value. This function alters name if it exists, or creates a new environment variable.
In either case, env_name becomes part of the environment; subsequent modifications to the string pointed to by env_name affect the environment.
The space for environment names and their values is limited. Consequently, putenv() can fail when there's insufficient space remaining to store an additional value.
Because env_name becomes part of the environment, don't pass a buffer that you plan to reuse; pass a duplicated copy instead:
putenv( strdup( buffer ) );
The following gets the string currently assigned to INCLUDE and displays it, assigns a new value to it, gets and displays it, and then removes INCLUDE from the environment.
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    char *path;

    path = getenv( "INCLUDE" );
    if( path != NULL ) {
        printf( "INCLUDE=%s\n", path );
    }

    if( putenv( "INCLUDE=/src/include" ) != 0 ) {
        printf( "putenv() failed setting INCLUDE\n" );
        return EXIT_FAILURE;
    }

    path = getenv( "INCLUDE" );
    if( path != NULL ) {
        printf( "INCLUDE=%s\n", path );
    }

    unsetenv( "INCLUDE" );

    return EXIT_SUCCESS;
}
This program produces the following output:
INCLUDE=/usr/nto/include
INCLUDE=/src/include
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/putenv.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Opened 8 years ago
Closed 8 years ago
#13531 closed (invalid)
django.core.files.File throws an exception on _get_size
Description
The File _get_size() function incorrectly uses self.file.name as a path.
When using a custom FileStorage like storage object, the files name and actual path may be different. This will trigger an AttributeError even though the file actually exists on disk and is readable.
Attachments (1)
Change History (6)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
If the file does not have a name (for example, is a StringIO object), the AttributeError is at line 39 in django/core/files/base.py (svn trunk):
def _get_size(self):
    if not hasattr(self, '_size'):
        if hasattr(self.file, 'size'):
            self._size = self.file.size
        elif os.path.exists(self.file.name):  #<-----------HERE
            self._size = os.path.getsize(self.file.name)
        else:
            raise AttributeError("Unable to determine the file's size.")
    return self._size
Changed 8 years ago by
comment:3 Changed 8 years ago by
The attached patch shows one way to find the size, though now I think about it, it is broken, because it leaves the file in a different state (you really need to find the initial offset with tell(), seek to the end, get size with tell, then seek back to the original place).
comment:4 Changed 8 years ago by
attempting to verify this bug...
comment:5 Changed 8 years ago by
The File class is just a wrapper for local file operations and has nothing to do with a custom file storage. file.name /is/ the path, so if you're using it for something else, then you're probably using it wrong. Every place that it's used in that class it's assumed to be the path (e.g., see the open() method). If you're trying to do something else, then you may need to create your own independent File class and use that in your custom FileStorage instead.
Closing this as invalid again...
I'm going to mark this invalid because I can't work out the actual bug that is being reported.
What attribute is causing an AttributeError? Why? What exact conditions cause the problem to occur? This is one of those situations where some sample code would be very helpful.
I don't deny you're seeing a problem, but there's a limit to what we can do to fix a problem when you don't give us the detail required to reproduce, diagnose and fix the problem. Feel free to reopen if you can provide more specifics.
|
https://code.djangoproject.com/ticket/13531
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Okay, so my problem is that I'm making a program that identifies whether the user input is a letter, a number between 0-9 or a symbol.
This is what I have so far:
#include <iostream>
#include <iomanip>
#include <windows.h>
#include <string>
using namespace std;

int main()
{
    char variable;

    cout << "Enter a letter, symbol or number between 0-9: ";
    cin >> variable;
    cout << endl;

    if(( variable >= '0') && (variable <= '9')) //Identifies if its a number between 0-9
    {
        cout << "You have entered: # " << variable << endl << endl;
    }else{
        (int)variable;
        if( variable == 'a', 'z') //Identifies if its a letter
        {
            cout << "You have entered: " << variable << " à" << endl;
        }
    }
    system("pause");
    return 0;
}
Now what I need help with is two things. First, how do I identify if the input is a ASCII symbol. And second, since I'm using a variable that is a char, when the user enters a number longer than one digit it only records the first one, and if i change it to int, then I can't test whether its a letter or a ASCII symbol.
Please if anybody could answer ASAP. I really need this for tomorrow.
|
https://www.daniweb.com/programming/software-development/threads/385662/help-fast-please-ascii
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Reset the state of a pseudo-random number generator
#include <stdlib.h> char *setstate( const char *state );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
After initialization, you can restart a state array at a different point in one of two ways:
A pointer to the previous state array, or NULL if an error occurred.
See initstate().
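The Examples section above refers to initstate(); a minimal sketch of initializing two state arrays and switching between them (buffer sizes and seeds here are arbitrary) might look like this:

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    static char state1[128];
    static char state2[128];

    initstate( 1, state1, sizeof( state1 ) );  /* seed and install state1 */
    initstate( 7, state2, sizeof( state2 ) );  /* seed and install state2; now current */

    setstate( state1 );                        /* switch back to state1 */
    printf( "from state1: %ld\n", random() );

    setstate( state2 );                        /* and over to state2 */
    printf( "from state2: %ld\n", random() );

    return EXIT_SUCCESS;
}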
POSIX 1003.1 XSI
drand48(), initstate(), rand(), random(), srand(), srandom()
|
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/s/setstate.html
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Use
@ExistenceChecking to specify how EclipseLink should check to determine if an entity is new or exists.
On
merge() operations, use
@ExistenceChecking to specify if EclipseLink uses only the cache to determine if an object exists, or if the object should be read (from the database or cache). By default the object will be read from the database.
Annotation Elements
Table 2-20 describes this annotation's elements.
Usage
You can specify
@ExistenceChecking on an Entity or MappedSuperclass.
EclipseLink supports the following existence checking types:
ASSUME_EXISTENCE – If the object's primary key does not include
null then it must exist. You may use this option if the application guarantees or does not care about the existence check.
ASSUME_NON_EXISTENCE – Assume that the object does not exist. You may use this option if the application guarantees or does not care about the existence check. This will always force an
INSERT operation.
CHECK_CACHE – If the object's primary key does not include
null and it is in the cache, then it must exist.
CHECK_DATABASE – Perform a
SELECT on the database.
Examples
Example 2-41 shows how to use this annotation.
Example 2-41 Using @ExistenceChecking Annotation
@Entity
@Cache(type=CacheType.HARD_WEAK, expiryTimeOfDay=@TimeOfDay(hour=1))
@ExistenceChecking(ExistenceType.CHECK_DATABASE)
public class Employee implements Serializable {
    ...
}
See Also
For more information, see:
"Enhancing Performance" in Solutions Guide for EclispeLink
|
http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/a_existencechecking.htm
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
NAME
sched_setaffinity, sched_getaffinity - set and get a thread's CPU affinity mask
SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */ #include <sched.h>
int sched_setaffinity(pid_t pid, size_t cpusetsize, const cpu_set_t *mask);
int sched_getaffinity(pid_t pid, size_t cpusetsize, cpu_set_t *mask);
DESCRIPTION
A thread's CPU affinity mask determines the set of CPUs on which it is eligible to run. On a multiprocessor system, setting the CPU affinity mask can be used to obtain performance benefits. For example, by dedicating one CPU to a particular thread (setting the affinity mask of that thread to specify a single CPU, and setting the affinity mask of all other threads to exclude that CPU), it is possible to ensure maximum execution speed for that thread.

sched_setaffinity() sets the CPU affinity mask of the thread whose ID is pid to the value specified by mask. If pid is zero, then the calling thread is used. The argument cpusetsize is the length (in bytes) of the data pointed to by mask. Normally this argument would be specified as sizeof(cpu_set_t).

sched_getaffinity() writes the affinity mask of the thread whose ID is pid into the cpu_set_t structure pointed to by mask. The cpusetsize argument specifies the size (in bytes) of mask. If pid is zero, then the mask of the calling thread is returned.

The set of CPUs on which a thread will actually run may be further limited according to any restrictions that may be imposed by cpuset cgroups; see cpuset(7).

ERRORS
- EPERM
- The calling thread does not have appropriate privileges. The caller needs an effective user ID equal to the real user ID or effective user ID of the thread identified by pid, or it must possess the CAP_SYS_NICE capability in the user namespace of the thread pid.
- ESRCH
- The thread whose ID is pid could not be found.

NOTES
There are various ways of determining the number of CPUs available on the system, including: inspecting the contents of /proc/cpuinfo; using sysconf(3) to obtain the values of the _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN parameters; and inspecting the list of CPU directories under /sys/devices/system/cpu/. sched(7) has a description of the Linux scheduling scheme. The affinity mask is a per-thread attribute that can be adjusted independently for each of the threads in a thread group. The isolcpus boot option can be used to isolate one or more CPUs at boot time, so that no processes are scheduled onto those CPUs. Following the use of this boot option, the only way to schedule processes onto the isolated CPUs is via sched_setaffinity() or the cpuset(7) mechanism. For further information, see the kernel source file Documentation/admin-guide/kernel-parameters.txt. As noted in that file, isolcpus is the preferred mechanism of isolating CPUs (versus the alternative of manually setting the CPU affinity of all processes on the system). A child created via fork(2) inherits its parent's CPU affinity mask. The affinity mask is preserved across an execve(2).
C library/kernel differences
This manual page describes the glibc interface for the CPU affinity calls. The actual system call interface is slightly different, with the mask being typed as unsigned long *, reflecting the fact that the underlying implementation of CPU sets is a simple bit mask.
Handling systems with large CPU affinity masks
The underlying system calls (which represent CPU masks as bit masks of type unsigned long *) impose no restriction on the size of the CPU mask. However, the cpu_set_t data type used by glibc has a fixed size of 128 bytes, meaning that the maximum CPU number that can be represented is 1023. If the kernel CPU affinity mask is larger than 1024, then calls of the form:

sched_getaffinity(pid, sizeof(cpu_set_t), &mask);

fail with the error EINVAL, the error produced by the underlying system call for the case where the mask size specified in cpusetsize is smaller than the size of the affinity mask used by the kernel. (Depending on the system CPU topology, the kernel affinity mask can be substantially larger than the number of active CPUs in the system.) When working on systems with large kernel CPU affinity masks, one must dynamically allocate the mask argument (see CPU_ALLOC(3)). Currently, the only way to do this is by probing for the size of the required mask using sched_getaffinity() calls with increasing mask sizes (until the call does not fail with the error EINVAL). Be aware that CPU_ALLOC(3) may allocate a slightly larger CPU set than requested (because CPU sets are implemented as bit masks allocated in units of sizeof(long)). Consequently, sched_getaffinity() can set bits beyond the requested allocation size, because the kernel sees a few additional bits. Therefore, the caller should iterate over the bits in the returned set, counting those which are set, and stop upon reaching the value returned by CPU_COUNT(3) (rather than iterating over the number of bits requested to be allocated).
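A minimal sketch of that probing loop, under the assumption that doubling the CPU count until sched_getaffinity() stops failing with EINVAL is acceptable for the target system:

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int ncpus = 1024;           /* starting guess for the kernel mask size */
    cpu_set_t *set;
    size_t setsize;

    for (;;) {
        set = CPU_ALLOC(ncpus);
        if (set == NULL)
            exit(EXIT_FAILURE);

        setsize = CPU_ALLOC_SIZE(ncpus);
        CPU_ZERO_S(setsize, set);

        if (sched_getaffinity(0, setsize, set) == 0)
            break;              /* the kernel accepted this mask size */

        CPU_FREE(set);
        if (errno != EINVAL)
            exit(EXIT_FAILURE); /* some other failure; give up */
        ncpus *= 2;             /* mask too small for the kernel; retry larger */
    }

    printf("calling thread may run on %d CPUs\n", CPU_COUNT_S(setsize, set));
    CPU_FREE(set);
    exit(EXIT_SUCCESS);
}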
EXAMPLE
The program below creates a child process. The parent and child then each assign themselves to a specified CPU and execute identical loops that consume some CPU time. Before terminating, the parent waits for the child to complete. The program takes three command-line arguments: the CPU number for the parent, the CPU number for the child, and the number of loop iterations that both processes should perform. As the sample runs below demonstrate, the amount of real and CPU time consumed when running the program will depend on intra-core caching effects and whether the processes are using the same CPU. We first employ lscpu(1) to determine that this (x86) system has two cores, each with two CPUs:
$ lscpu | grep -i 'core.*:|socket'
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
We then time the operation of the example program for three cases: both processes running on the same CPU; both processes running on different CPUs on the same core; and both processes running on different CPUs on different cores.
$ time -p ./a.out 0 0 100000000
real 14.75
user 3.02
sys 11.73
$ time -p ./a.out 0 1 100000000
real 11.52
user 3.98
sys 19.06
$ time -p ./a.out 0 3 100000000
real 7.89
user 3.29
sys 12.07
Program source
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define errExit(msg)    do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int main(int argc, char *argv[])
{
    cpu_set_t set;
    int parentCPU, childCPU;
    int nloops, j;

    if (argc != 4) {
        fprintf(stderr, "Usage: %s parent-cpu child-cpu num-loops\n",
                argv[0]);
        exit(EXIT_FAILURE);
    }

    parentCPU = atoi(argv[1]);
    childCPU = atoi(argv[2]);
    nloops = atoi(argv[3]);

    CPU_ZERO(&set);

    switch (fork()) {
    case -1:            /* Error */
        errExit("fork");

    case 0:             /* Child */
        CPU_SET(childCPU, &set);

        if (sched_setaffinity(getpid(), sizeof(set), &set) == -1)
            errExit("sched_setaffinity");

        for (j = 0; j < nloops; j++)
            getppid();

        exit(EXIT_SUCCESS);

    default:            /* Parent */
        CPU_SET(parentCPU, &set);

        if (sched_setaffinity(getpid(), sizeof(set), &set) == -1)
            errExit("sched_setaffinity");

        for (j = 0; j < nloops; j++)
            getppid();

        wait(NULL);     /* Wait for child to terminate */
        exit(EXIT_SUCCESS);
    }
}
|
http://jlk.fjfi.cvut.cz/arch/manpages/man/sched_setaffinity.2.en
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Data replication protocol
Publication number: US7571215B2
-/305,986, filed Jul. 16, 2001, entitled DATA REPLICATION PROTOCOL, incorporated herein by reference.
The following application is cross-referenced and incorporated herein by reference:
U.S. patent application Ser. No. 90/975,587 entitled "LAYERED ARCHITECTURE FOR DATA REPLICATION", inventors Dean Bernard Jacobs, Reto Kramer, and Ananthan Bala Srinivasan, filed on Oct.
The invention relates generally to a system and method for transferring data. The invention relates more specifically to a system and method for replicating data over a network.
There are several types of distributed processing systems. Generally, a distributed processing system includes a plurality of processing devices, such as two computers coupled through a communication medium. One type of distributed processing system is a client/server network. A client/server network includes at least two processing devices, typically a central server and a client. Additional clients may be coupled to the central server, there may be multiple servers, or the network may include only servers coupled through the communication medium.
In such a network environment, it is often desirable to send applications or information from the central server to a number of workstations and/or other servers. Often, this may involve separate installations on each workstation, or may involve separately pushing a new library of information from the central server to each individual workstation and/or server. These approaches can be time consuming and are an inefficient use of resources. The separate installation of applications on each workstation or server also introduces additional potential sources of error.
Ideally, the sending of information should be both reliable in the face of failures and scalable, so that the process makes efficient use of the network. Conventional solutions generally fail to achieve one or both of these goals. One simple approach is to have a master server individually contact each slave and transfer the data over a point-to-point link, such as a TCP/IP connection. This approach leads to inconsistent copies of the data if one or more slaves are temporarily unreachable, or if the slaves encounter an error in processing the update. At the other extreme are complex distributed agreement protocols, which require considerable cross-talk among the slaves to ensure that all copies of the data are consistent.
The present invention includes a method for replicating data from a master server to at least one slave or managed server, such as may be accomplished on a network. In the method, it may be determined whether the replication should be accomplished in a one or two phase method. If the replication is to be accomplished in a one phase method, a version number may be sent that corresponds to the current state of the data on the master server. This version number may be sent to every slave server on the network, or only a subset of slave servers. The slave servers receiving the version number may then request that a delta be sent from the master. The delta may contain data necessary to update the data on that slave to correspond to the current version number.
If the replication is to be accomplished in a two phase method, a packet of information may be sent from the master to each slave, or a subset of slaves. Those slaves may then respond to the master server whether they can commit the packet of information. If at least some of the slaves can commit the data, the master may signal to those slave that they should process the commit. After processing the commit, those slaves may update to the current version number. If any of the slaves are unable to process the commit, the commit may be aborted.
The present invention provides for the replication of data or other information, such as from a master server, or “administration” server (“Admin server”), to a collection of slave servers, or “managed” servers. This replication can occur over any appropriate network, such as a conventional local area network or ethernet. In one embodiment, a master server owns the original record of all data on the network, to which any updates are to be applied. A copy of the data, together with updates as they occur, can be transmitted to each slave server. One example application involves the distribution of configuration information from an Admin server to a collection of managed servers.
In one system in accordance with the present invention, it may be necessary for a service, such as a Data Replication Service (DRS), to distribute configuration and deployment information from an Admin Server to managed servers in the appropriate domain. Large data items can be distributed over point-to-point connections, such as Transmission Control Protocol (“TCP”), since a multicast protocol like User Datagram Protocol (“UDP”) does not have flow control, and can overwhelm the system. Remote Method Invocation (RMI), Hypertext Transfer Protocol (HTTP), or a similar protocol may be used for point-to-point connections.
Managed servers can also persistently cache data on local disks. Without such caching, an unacceptable amount of time may be required to transfer the necessary data. The ability of the managed servers to cache is important, as it increases the speed of startup by reducing the amount of startup data to be transferred. Caching can also allow startup and/or restart if the Admin Server is unreachable. Restart may be a more attractive option, and it may be the case that the Admin server directs a server to start. Caching, however, can provide the ability to start the domain without the Admin Server being available.
As shown in the domain structure 100 of
Updates to data on the Admin Server can be packaged as incremental deltas between versions. The deltas can contain configuration and/or other information to be changed. It may be preferable to update the configuration while the domain is running, as it may be undesirable to take the system offline. In one embodiment, the configuration changes happen dynamically, as they are pushed out by the Admin Server. Only the changes to the configuration are sent in the deltas, as it may be unnecessary, and unduly cumbersome, to send the full configuration each time.
A protocol in accordance with the present invention integrates two methods for the distribution of updates, although other appropriate methods may be used accordingly. These distribution methods may be referred to as a one-phase method and a two-phase method, and can provide a tradeoff between consistency and scalability. In a one-phase method, which may favor scalability, each slave can obtain and process updates at its own pace. Slaves can get updates from the master at different times, but can commit to the data as soon as it is received. A slave can encounter an error in processing an update, but in the one-phase method this does not prevent other slaves from processing the update.
In a two-phase method in accordance with the present invention, which may favor consistency, the distribution can be “atomic”, in that either all or none of the slaves successfully process the data. There can be separate phases, such as prepare and commit phases, which can allow for a possibility of abort. In the prepare phase, the master can determine whether each slave can take the update. If all slaves indicate that they can accept the update, the new data can be sent to the slaves to be committed in the commit phase. If at least one of the slave servers cannot take the update, the update can be aborted and there may not be a commit. In this case, the managed servers can be informed that they should roll back the prepare and nothing is changed. Such a protocol in accordance with the present invention is reliable, as a slave that is unreachable when an update is committed, in either method, eventually gets the update.
A system in accordance with the present invention can also ensure that a temporarily unavailable server eventually receives all updates. For example, a server may be temporarily isolated from the network, then come back into the network without restarting. Since the server is not restarting, it normally will not check for updates. The server coming back into the network can be accounted for by having the server check periodically for new updates, or by having a master server check periodically to see whether the servers have received the updates.
In one embodiment, a master server regularly sends multicast "heartbeats" to the slave servers. Since a multicast approach can be unreliable, it is possible for a slave to miss arbitrary sequences of heartbeats. For instance, a slave server might be temporarily disconnected from the network due to a network partitioning, or the slave server itself might be temporarily unavailable, causing a heartbeat to be missed. Heartbeats can therefore contain a window of information about recent updates. Such information about previous updates may be used to reduce the amount of network traffic, as explained below.
There can be at least two layers within each master and each slave: a user layer and a system layer (or DRS layer). The user layer can correspond to the user of the data replication system. A DRS layer can correspond to the implementation of the data replication system itself. The interaction of these participants and layers is shown in
As shown in the startup diagram 200 of
registerMaster(DID, verNum, listener)
registerSlave(DID, verNum, listener)
where DID is an identifier taken from knowledge of well-known DIDs and refers to the object of interest, verNum is taken from the local persistent store as the user's current version number, and listener is an object that will handle upcalls from the DRS layer. The upcall can call a method on the listener object. The master can then begin to send heartbeats, or periodic deltas, with the current version number. A container layer 210 is shown, which can include containers adapted to take information from the slave user 204. Examples of possible containers include enterprise Java beans, web interfaces, and J2EE (Java 2 Platform, Enterprise Edition) applications. Other applications and/or components can plug into the container layer 210, such as an administration client 212. Examples of update messaging between the User and DRS layers are shown for the one phase method in
The master DRS layer begins multicasting heartbeats 408, containing the current version number of the data on the master, to the slave DRS layer 410. The slave DRS layer 410 requests the current version number 412 for the slave from the slave user layer 414. The slave user layer 414 then responds 416 to the slave DRS layer 416 with the slave version number. If the slave is in sync, or already is on the current version number, then no further requests may be made until the next update. If the slave is out-of-sync and the slave is in the scope of the update, the slave DRS layer 410 can request a delta 420 from the master DRS layer 406 in order to update the slave to the current version number of the data on the master. The master DRS layer 406 requests 422 that the master user layer 402 create a delta to update the slave. The master user layer 402 then sends the delta 424 to the master DRS layer 406, which forwards the delta 426 and the current version number of the master to the slave DRS layer 410, which sends the delta 426 to the slave user to be committed. The current version number is sent with the delta in case the master has updated since the heartbeat 408 was received by the slave.
The master DRS layer 406 can continue to periodically send a multicast heartbeat containing the version number 408 to the slave server(s). This allows any slave that was unavailable, or unable to receive and process a delta, to determine that it is not on the current version of the data and request a delta 420 at a later time, such as when the slave comes back into the system.
The master DRS layer 506 sends the new delta 508 to the slave DRS layer 510. The slave DRS layer 510 sends a prepare request 512 to the slave user layer 514 for the new delta. The slave user layer 514 then responds 516 to the slave DRS layer 510 whether or not the slave can process the new delta. The slave DRS layer forwards the response 518 to the master DRS layer 506. If the slave cannot process the request because it is out-of-sync, the master DRS layer 506 makes an upcall 520 to the master user layer 502 to create a delta that will bring the slave in sync to commit the delta. The master user layer 502 sends the syncing delta 522 to the master DRS layer, which forwards the syncing delta 524 to the slave DRS layer 510. If the slave is able to process the syncing delta, the slave DRS layer 510 will send a sync response 526 to the master DRS layer 506 that the slave can now process the new delta. If the slave is not able to process the syncing delta, the slave DRS layer 510 will send the appropriate sync response 526 to the master DRS layer 506. The master DRS layer 506 then heartbeats a commit or abort message 528 to the slave DRS layer 510, depending on whether or not the slave responded that it was able to process the new delta. If all slave were able to prepare the delta, for example, the master can heartbeat a commit signal. Otherwise, the master can heartbeat an abort signal. The heartbeats also contains the scope of the update, such that a slave knows whether or not it should process the information contained in the heartbeat.
The slave DRS layer forwards this command 530 to the slave user layer 514, which then commits or aborts the update for the new delta. If the prepare phase was not completed within a timeout value set by the master user layer 502, the master DRS layer 506 can automatically heartbeat an abort 528 to all the slaves. This may occur, for example, when the master DRS layer 506 is unable to contact at least one of the slaves to determine whether that slave is able to process the commit. The timeout value can be set such that the master DRS layer 506 will try to contact the slave for a specified period of time before aborting the update.
For an update in a one-phase method, these heartbeats can cause each slave to request a delta starting from the slave's current version of the data. Such a process is shown in the flowchart of
For an update in a two-phase method, the master can begin with a prepare phase in which it pro-actively sends each slave a delta from the immediately-previous version. Such a process is shown in the flowchart of
A slave can be configured to immediately start and/or restart using cached data, without first getting the current version number from the master. As mentioned above, one protocol in accordance with the present invention allows slaves to persistently cache data on local disks. This caching decreases the time needed for system startup, and improves scalability by reducing the amount of data needing to be transferred. The protocol can improve reliability by allowing slaves to startup and/or restart if the master is unreachable, and may further allow updates to be packaged as incremental deltas between versions. If no cache data exists, the slave can wait for the master or can pull the data itself. If the slave has the cache, it may still not want to start out of sync. Startup time may be decreased if the slave knows to wait.
The protocol can be bilateral, in that a master or slave can take the initiative to transfer data, depending upon the circumstances. For example, a slave can pull a delta from the master during domain startup. When the slave determines it is on a different version than the delta is intended to update, the slave can request a delta from its current version to the current system version. A slave can also pull a delta during one-phase distribution. Here, the system can read the heartbeat, determine that it has missed the update, and request the appropriate delta.
A slave can also pull a delta when needed to recover from exceptional circumstances. Exceptional circumstances can exist, for example, when components of the system are out of sync. When a slave pulls a delta, the delta can be between arbitrary versions of the data. In other words, the delta can be between the current version of the slave and the current version of the system (or domain), no matter how many iterations apart those versions might be. In this embodiment, the availability of a heartbeat and the ability to receive deltas can provide synchronization of the system.
In addition to the ability of a slave to pull a delta, a master can have the ability to push a delta to a slave during two-phase distribution. In one embodiment, these deltas are always between successive versions of the data. This two-phase distribution method can minimize the likelihood of inconsistencies between participants. Slave users can process a prepare as far as possible without exposing the update to clients or making the update impossible to roll back. This can include such tasks as checking the servers for conflicts. If any of the slaves signals an error, such as by sending a “disk full” or “inconsistent configuration” message, the update can be uniformly rolled back.
It is still possible, however, that inconsistencies may arise. For instance, there may be errors in processing a commit, for reasons such as an inability to open a socket. Servers can also commit and expose the update at different times. Because the data cannot reach every managed server at exactly the same time, there can be some rippling effect. The use of multicasting can provide for a small time window, in an attempt to minimize the rippling effect. In one embodiment, a prepared slave will abort if it misses a commit, whether it missed the signal, the master crashed, etc.
A best-effort approach to multicasting can cause a slave server to miss a commit signal. If a master crashes part way through the commit phase, there may be no logging or means for recovery. There may be no way for the master to tell the remaining slaves that they need to commit. Upon abort some slaves may end up committing the data if the version is not properly rolled back. In one embodiment, the remaining slaves could get the update using one-phase distribution. This might happen, for example, when a managed server pulls a delta in response to a heartbeat received from an Admin server. This approach may maintain system scalability, which might be lost if the system tied down distribution in order to avoid any commit or version errors.
Each data item managed by the system can be structured to have a unique, long-lived domain identifier (DID) that is well-known across the domain. A data item can be a large, complex object made up of many components, each relevant to some subset of the servers in the domain. Because these objects can be the units of consistency, it may be desirable to have a few large objects, rather than several tiny objects. As an example, a single data item or object can represent all configuration information for a system, including code files such as a config.xml file or an application-EAR file. A given component in the data item can, for example, be relevant to an individual server as to the number of threads, can be relevant to a cluster as to the deployed services, or can be relevant to the entire domain regarding security certificates. A delta between two versions can consist of new values for some or all of these components. For example, the components may include all enterprise Java beans deployed on members of the domain. A delta may include changes to only a subset of these Java beans.
The “scope” of a delta can refer to the set of all servers with a relevant component in the delta. An Admin server in accordance with the present invention may be able to interpret a configuration change in order to determine the scope of the delta. The DRS system on the master may need to know the scope in order to send the data to the appropriate slaves. It might be a waste of time and resources to send every configuration update to every server, when a master may only need to touch a subset of servers in each update.
To control distribution, the master user can provide the scope of each update along with the delta between successive versions. A scope may be represented as a set of names, referring to servers and/or clusters, which may be taken from the same namespace within a domain. In one embodiment, the DRS uses a resolver module to map names to addresses. A cluster name can map to the set of addresses of all servers in that cluster. These addresses can be relative, such as to a virtual machine. The resolver can determine whether there is an intervening firewall, and return either an “inside” or “outside” address, relating to whether the server is “inside the firewall” as is known and used in the art. An Admin server or other server can initialize the corresponding resolver with configuration data.
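Purely as an illustration of the resolver role described above, the following Java sketch maps a scope name (a server or cluster) to a list of addresses; the class name ScopeResolver, its internal maps, and the firewall flag are assumptions made for this example and are not taken from the described system.

import java.util.*;

// Illustrative sketch of a resolver that maps scope names to addresses.
public class ScopeResolver {
    private final Map<String, List<String>> clusterMembers = new HashMap<>(); // cluster name -> member server names
    private final Map<String, String> insideAddress = new HashMap<>();        // server name -> address inside the firewall
    private final Map<String, String> outsideAddress = new HashMap<>();       // server name -> address outside the firewall

    // Resolve a scope name (a server or a cluster) to the addresses of all servers it covers.
    public List<String> resolve(String name, boolean callerIsOutsideFirewall) {
        List<String> servers = clusterMembers.getOrDefault(name, Collections.singletonList(name));
        List<String> addresses = new ArrayList<>();
        for (String server : servers) {
            Map<String, String> table = callerIsOutsideFirewall ? outsideAddress : insideAddress;
            addresses.add(table.getOrDefault(server, server));
        }
        return addresses;
    }
}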
Along with the unique, long-lived domain identifier (DID) for each managed data item, each version of a data item can also have a long-lived version number. Each version number can be unique to an update attempt, such that a server will not improperly update or fail to update due to confusion as to the proper version. Similarly, the version number for an aborted two-phase distribution may not be re-used. The master may be able to produce a delta between two arbitrary versions given just the version numbers. If the master cannot produce such a delta, a complete copy of the data or application may be provided.
It may be desirable to keep the data replication service as generic as possible. A few assumptions may therefore be imposed upon the users of the system. The system may rely on, for example, three primary assumptions:
- the system may include a way to increment a version number
- the system may persistently store the version number on the master as well as the slave
- the system may include a way to compare version numbers and determine equality
These assumptions may be provided by a user-level implementation of a DRS interface, such as an interface “VersionNumber.” Such an interface may allow a user to provide a specific notion and implementation of the version number abstraction, while ensuring that the system has access to the version number attributes. In Java, for example, a VersionNumber interface may be implemented as follows:
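The interface listing itself is not reproduced in this text. Purely as a minimal, hypothetical sketch (the method names are assumptions, not the actual DRS API), a Java interface covering the three assumptions might look like:

import java.io.Serializable;

// Hypothetical user-implemented version number abstraction; names are illustrative only.
public interface VersionNumber extends Serializable {
    // A way to increment the version number (assumption 1).
    VersionNumber increment();

    // A way to compare version numbers and determine equality (assumption 3).
    boolean isEqual(VersionNumber other);
    boolean isNewerThan(VersionNumber other);
}

Persistent storage of the version number (assumption 2) would be handled by the master and slave themselves, and extending Serializable supports the transmission requirement noted below. A trivial implementation could simply wrap a large positive integer.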
A simplistic implementation of this abstraction that a user could provide to the system would be a large, positive integer. The implementation may also ensure that the system can transmit delta information via the network from the master to the slaves, referred to in the art as being “serializable.”
If using the abstraction above, it may be useful to abstract from a notion of the detailed content of a delta at the user level. The system may require no knowledge of the delta information structure, and in fact may not even be able to determine the structure. The implementation of the delta can also be serializable, ensuring that the system can transmit delta version information via the network from the master to the slaves.
It may be desirable to have the master persistently store the copy of record for each data item, along with the appropriate DID and version number. Before beginning a two-phase distribution, the master can persistently store the proposed new version number to ensure that it is not reused, in the event the master fails. A slave can persistently store the latest copy of each relevant data item along with its DID and version number. The slave can also be configured not to do the necessary caching, such that the slave may have to get the data every time. This may not be desirable in all cases, but may be allowed in order to handle certain situations that may arise.
A system in accordance with the present invention may further include concurrence restrictions. For instance, certain operations may not be permitted during a two-phase distribution of an update for a given DID over a given scope. Such operations may include a one- or two-phase update, such as a modification of the membership of the scope on the same DID, over a scope with a non-empty intersection.
In at least one embodiment, the master DRS regularly multicasts heartbeats, or packets of information, to the slave DRS on each server in the domain. For each DID, a heartbeat may contain a window of information about the most recent update(s), including each update version number, the scope of the delta with respect to the previous version, and whether the update was committed or aborted. Information about the current version may always be included. Information about older versions can also be used to minimize the amount of traffic back to the master, and not for correctness or liveness.
With the inclusion of older version information in a delta, the slave can commit that portion of the update it was expecting upon the prepare, and ask for a new delta to handle more recent updates. Information about a given version can be included for at least some fixed, configurable number of heartbeats, although rapid-fire updates may cause the window to increase to an unacceptable size. In another embodiment, information about an older version can be discarded once a master determines that all slaves have received the update.
Multicast heartbeats may have several properties to be taken into consideration. These heartbeats can be asynchronous or “one-way”. As a result, by the time a slave responds to a heartbeat, the master may have advanced to a new state. Further, not all slaves may respond at exactly the same time. As such, a master can assume that a slave has no knowledge of its state, and can include that which the delta is intended to update. These heartbeats can also be unreliable, as a slave may miss arbitrary sequences of heartbeats. This can again lead to the inclusion of older version information in the heartbeats. In one embodiment, heartbeats are received by a slave in the order they were sent. For example, a slave may not commit version seven until it has committed version six. The server may wait until it receives six, or it may simply throw out six and commit seven. This ordering may eliminate the possibility for confusion that might be created by versions going backwards.
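To make the window structure concrete, the per-DID information carried in a heartbeat might be modeled roughly as follows; the class and field names are illustrative assumptions rather than the actual wire format.

import java.io.Serializable;
import java.util.List;
import java.util.Set;

// Hypothetical model of the update window a heartbeat carries for one data item.
public class HeartbeatInfo implements Serializable {
    public static class UpdateRecord implements Serializable {
        public long versionNumber;      // version produced by the update
        public Set<String> scope;       // servers/clusters with a relevant component in the delta
        public boolean committed;       // true if committed, false if aborted
    }

    public String did;                  // domain identifier of the data item
    public long currentVersion;         // information about the current version is always included
    public List<UpdateRecord> window;   // recent updates, delivered in the order they were sent
}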
As mentioned above, the domains may also utilize clustering, as shown in the domain diagram 300 of the accompanying figures.
There can also be more than one domain. In this case, there can be nested domains or “syndicates.” Information can be spread to the domain masters by touching each domain master directly, as each domain master can have the ability to push information to the other domain masters. It may, however, be undesirable to multicast to domain masters.
In one-phase distribution, a master user can make a downcall in order to trigger the distribution of an update. Such a downcall can take the form of:
startOnePhase(DID, newVerNum, scope)
where DID is the ID of the data item or object that was updated, newVerNum is the new version number of the object, and scope is the scope to which the update applies. The master DRS may respond by advancing to the new version number, writing the new number to disk, and including the information in subsequent heartbeats.
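A rough sketch of that master-side reaction, reusing the HeartbeatInfo sketch above, might look like the following; the field and helper names are assumptions for illustration only.

import java.util.*;

public class MasterDRS {
    private final Map<String, Long> currentVersion = new HashMap<>();
    private final Map<String, List<HeartbeatInfo.UpdateRecord>> heartbeatWindow = new HashMap<>();

    public void startOnePhase(String did, long newVerNum, Set<String> scope) {
        currentVersion.put(did, newVerNum);          // advance to the new version number
        persistVersion(did, newVerNum);              // write the new number to disk

        HeartbeatInfo.UpdateRecord record = new HeartbeatInfo.UpdateRecord();
        record.versionNumber = newVerNum;
        record.scope = scope;
        record.committed = true;
        // Include the update in subsequent heartbeats.
        heartbeatWindow.computeIfAbsent(did, k -> new ArrayList<>()).add(record);
    }

    private void persistVersion(String did, long version) {
        // Write the DID/version pair to stable storage (not shown).
    }
}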
When a slave DRS receives a heartbeat, it can determine whether it needs a pull by analyzing the window of information relating to recent updates of interest. If the slave's current version number is within the window and the slave is not in the scope of any of the subsequent committed updates, it can simply advance to the latest version number without pulling any data. This process can include the trivial case where the slave is up-to-date. Otherwise, the slave DRS may make a point-to-point call for a delta from the master DRS, or another similar request, which may take the form of:
createDelta(DID, curVerNum)
where curVerNum is the current number of the slave, which will be sent back to the domain master or cluster master. To handle this request, the master DRS may make an upcall, such as createDelta(curVerNum). This upcall may be made through the appropriate listener in order to obtain the delta and the new version number, and return them to the slave DRS. The new version number should be included, as it may have changed since the slave last received the heartbeat. The delta may only be up to the most recently committed update. Any ongoing two-phase updates may be handled through a separate mechanism. The slave DRS may then make an upcall to the slave user, such as commitOnePhase(newVerNum, delta) and then advance to the new version number.
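The slave-side decision described in the preceding paragraphs might be sketched as below; the bookkeeping, the window checks, and the stub types standing in for the master API and the slave user are all hypothetical.

import java.util.HashMap;
import java.util.Map;

public class SlaveDRS {
    private final Map<String, Long> versions = new HashMap<>();
    private final String localName = "server-1";               // hypothetical local server name
    private final MasterStub master = new MasterStub();        // stands in for the point-to-point channel to the master DRS
    private final SlaveUserStub user = new SlaveUserStub();    // stands in for the slave user (client layer)

    public void onHeartbeat(HeartbeatInfo hb) {
        long current = versions.getOrDefault(hb.did, 0L);
        if (current == hb.currentVersion) return;                // trivial case: already up to date
        if (windowCovers(hb, current) && !inScopeOfLaterCommit(hb, current)) {
            versions.put(hb.did, hb.currentVersion);             // advance without pulling any data
        } else {
            Delta delta = master.createDelta(hb.did, current);   // point-to-point pull for a delta
            user.commitOnePhase(delta.newVerNum, delta);         // upcall to the slave user
            versions.put(hb.did, delta.newVerNum);
        }
    }

    // Simplistic checks for the example; a real implementation would track the window bounds exactly.
    private boolean windowCovers(HeartbeatInfo hb, long version) {
        return hb.window.stream().anyMatch(r -> r.versionNumber > version);
    }
    private boolean inScopeOfLaterCommit(HeartbeatInfo hb, long version) {
        return hb.window.stream().anyMatch(r -> r.versionNumber > version && r.committed && r.scope.contains(localName));
    }

    static class Delta { long newVerNum; byte[] payload; }
    static class MasterStub { Delta createDelta(String did, long fromVersion) { return new Delta(); } }
    static class SlaveUserStub { void commitOnePhase(long newVerNum, Delta delta) { /* apply the delta */ } }
}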
In order to trigger a two-phase update distribution, the master user can make a downcall, such as startTwoPhase(DID, oldVerNum, newVerNum, delta, scope, timeout), where DID is the ID of the data item or object to be updated, oldVerNum is the previous version number, newVerNum is the new version number (one step from the previous version number), delta is the delta between the successive versions to be pushed, scope is the scope of the update, and timeout is the maximum time-to-live for the job. Because the “prepare” and “commit” are synchronous, it may be desirable to set a specific time limit for the job. The previous version number may be included so that a server on a different version number will not take the delta.
The master DRS in one embodiment goes through all servers in the scope and makes a point-to-point call to each slave DRS, such as prepareTwoPhase(DID, oldVerNum, newVerNum, delta, timeout). The slave can then get the appropriate timeout value. Point-to-point protocol can be used where the delta is large, such as a delta that includes binary code. Smaller updates, which may for example include only minor configuration changes such as modifications of cache size, can be done using the one-phase method. This approach can be used because it may be more important that big changes like application additions get to the servers in a consistent fashion. The master can alternatively go to cluster masters, if they exist, and have the cluster masters make the call. Having the master proxy to the cluster masters can improve system scalability.
In one embodiment, each call to a slave or cluster master produces one of four responses, such as “Unreachable”, “OutOfSync”, “Nak”, and “Ack”, which are handled by the master DRS. If the response is “Unreachable”, the server in question cannot be reached and may be queued for retry. If the response is “OutOfSync”, the server may be queued for retry. In the meantime, the server will attempt to sync itself by using a pull from the master, so that it may receive the delta upon retry. If the response is “Nak”, or no acknowledgment, the job is aborted. This response may be given when the server cannot accept the job. If the response is “Ack”, no action is taken.
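A sketch of how the master DRS might dispatch on these responses follows; the enum, queue, and abort handling are illustrative assumptions, not the actual implementation.

import java.util.*;

class PrepareResponseHandler {
    enum PrepareResponse { UNREACHABLE, OUT_OF_SYNC, NAK, ACK }

    private final Queue<String> retryQueue = new ArrayDeque<>();
    private final Set<String> acks = new HashSet<>();

    void handle(String server, PrepareResponse response) {
        switch (response) {
            case UNREACHABLE:
            case OUT_OF_SYNC:
                retryQueue.add(server);          // queue for retry; an out-of-sync slave pulls a delta in the meantime
                break;
            case NAK:
                abort("rejected by " + server);  // any Nak aborts the two-phase job
                break;
            case ACK:
                acks.add(server);                // record the acknowledgment; no further action needed
                break;
        }
    }

    private void abort(String reason) {
        // Make the twoPhaseFailed upcall and stop retrying (not shown).
    }
}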
In order to prepare the slaves, a master DRS can call a method such as prepareTwoPhase. Upon receiving a “prepare” request from the master DRS, the slave DRS can first check whether its current version number equals the old version number to be updated. If not, the slave can return an “OutOfSync” response. The slave can then pull a delta from the master DRS as if it had just received a heartbeat. Eventually, the master DRS can retry the prepareTwoPhase. This approach may be simpler than having the master push the delta, but may require careful configuration of the master. The configuring of the master may be needed, as waiting too long for a response can cause the job to time out. Further, not waiting long enough can lead to additional requests getting an “OutOfSync” response. It may be preferable to trigger the retry upon completion of the pull request from the slave.
If the slave is in sync, the slave can make an upcall to the client layer on the slave side, as deep into the server as possible, such as prepareTwoPhase(newVerNum, delta). The resulting “Ack” or “Nak” that is returned can then be sent to the master DRS. If the response was an “Ack”, the slave can go into a special prepared state. If the response was a “Nak”, the slave can flush any record of the update. If it were to be later committed for some reason, the slave can obtain it as a one-phase distribution, which may then fail.
If the master DRS manages to collect an “Ack” from every server within the timeout period, it can make a commit upcall, such as twoPhaseSucceeded(newVerNum), and advance to the new version number. If the master DRS receives a “Nak” from any server, or if the timeout period expires, the master DRS can make an abort upcall, such as twoPhaseFailed(newVerNum, reason), and leave the version number unchanged. Here, reason is an exception, containing a roll-up of any “Nak” responses. In both cases, the abort/commit information can be included in subsequent heartbeats.
At any time, the master user can make a cancel downcall, such as cancelTwoPhase(newVerNum). The master DRS can then handle this call by throwing an exception, if the job is not in progress, or acting as if an abort is to occur.
If a prepared slave DRS gets a heartbeat indicating the new version was committed, the slave DRS can make an upcall, such as commitTwoPhase(newVerNum), and advance to the new version number. If a prepared slave DRS instead gets a heartbeat indicating the new version was aborted, the slave can abort the job. The slave can also abort the job when the slave gets a heartbeat where the window has advanced beyond the new version, the slave gets a new prepareTwoPhase call on the same data item, or the slave times out the job. In such a case, the slave can make an upcall, such as abortTwoPhase(newVerNum), and leave the version number unchanged. This is one way to ensure the proper handling of situations such as where a master server fails after the slaves were prepared but before the slaves commit.
https://patents.google.com/patent/US7571215B2/en
User role mapping in web applications
- Publication number: US8397283B2
- Authority: US
- Grant status: Grant
- Prior art keywords: user, proxy, server, pagelet
This application is a continuation of U.S. patent application Ser. No. 11/917,764 filed on Nov. 2, 2010, which is a continuation of U.S. patent application Ser. No. 11/765,303 filed on Jun. 19, 2007, which claims priority to U.S. Provisional Application No. 60/826,633 filed on Sep. 22, 2006 and to U.S. Provisional Application No. 60/883,398 filed on Jan. 4, 2007, all of which are hereby incorporated by reference in their entirety.
Web applications have become increasingly popular within the enterprise as a result of their flexibility of deployment and their relatively intuitive interfaces, but web applications present potential problems in the enterprise environment due to security and governance issues.
Some embodiments of the present invention may be useful in reverse proxy and Single Sign On (SSO) environments.
For purposes of this application a reverse proxy can be any system that can do such a reverse mapping. In one embodiment, a reverse proxy is a server that proxies content from a remote web application to an end-user and may or may not modify that content.
No additional or supplemental functionality, such as SSO, should be imputed to the meaning of the term “Reverse Proxy” or “Proxy”.
Supplemental functionalities can include authentication to determine who the user is; authorization to determine if the user has access to the requested resources; transformation functionality to use tags to combine data from different applications, such as web applications and/or rewrite URLs within a response to point to the reverse proxy 104. The functionality can also include gathering analytics and auditing data.
Authentications and authorizations can be part of a SSO system such that all requests through the reverse proxy 104 only require a single sign on.
Authorization can be done by having the reverse proxy 104 handle the mapping of users for a web application to roles. In one embodiment, the web applications can use different roles while the mapping of users to roles can be controlled by the reverse proxy 104.
In one embodiment, different types of authentication can be ranked in order of security. The authentication can be used to access the application if the SSO authentication has a security authorization at or above that required by the application.
The use of a reverse proxy can also allow for centralized governance. The reverse proxy can keep a central record of web application usage statistics.
Single sign on can be enabled by having the reverse proxy 104 send credentials (such as user names and passwords) from a credential vault 110 to the application.
In an exemplary case, the rewriting of URLs can be done by checking the URL for a prefix mapped by the reverse proxy and then converting the prefix to point to the reverse proxy. For example, “” can be converted to “”. Adaptive and pagelet tags are discussed below in more detail and are a way to combine functionality from applications.
One embodiment of the present invention concerns the control of passwords in a credential vault. For the purpose of this application a credential vault is any storage location for credentials (such as passwords).
It is desirable that any passwords in the credential vault remain secure. For this reason, these passwords can be encrypted.
One embodiment of the present invention comprises encrypting a number of secondary passwords with a primary password. The secondary passwords can be passwords to applications accessed through a reverse proxy. The primary password can be for a single sign on system such as a reverse proxy. The secondary passwords can be stored in a credential vault 202. An encrypted secondary password can be decrypted from the credential vault using the primary password and provided to an application.
The primary password can be a password to a reverse proxy system or SSO system. The secondary passwords can be passwords to remote web applications. The secondary passwords can be encrypted with a function, such as a hash, of the primary password. A fixed string can be encrypted and stored in the credential vault in the same manner. This encrypted fixed string can be used to test to see if the primary password has changed since the secondary passwords and fixed string have been encrypted. The user can be prompted to input the new and old primary passwords, if the primary password has been changed. The secondary password can be decrypted using the old primary password and reencrypted using the new primary password.
The secondary password can be stored as:
- Ehash(primary password)(secondary password)
and the secondary password can be reconstructed using:
- Dhash(primary password) [Ehash(primary password)(secondary password)]
- Where Ex(y) is the encryption of y using key x, and Dx(y) is the decryption of y using key x. The key can be a hash, or any other function, of the primary password. Any type of encryption can be used.
If the primary password has changed, then:
Dhash(new primary password) [Ehash(old primary password)(secondary password)]≠secondary password
To avoid sending the wrong secondary password to an application, a known string can be encrypted and a test can be done. If:
Dhash(new primary password) [Ehash(old primary password)(known string)]≠known string
Then the user can be prompted to input the old and new password with a page 220. The encrypted secondary passwords can then be decrypted with the old primary password then re-encrypted with the new password.
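As a minimal sketch of this scheme (the known string is taken from the example below, while the SHA-256/AES choices and the method names are assumptions; a production system would add salting and an authenticated cipher mode), the encryption, decryption, and change-detection steps might look like:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class CredentialVaultCrypto {
    private static final String KNOWN_STRING = "BOJAMBLES";

    // key = hash(primary password)
    private static SecretKeySpec keyFrom(String primaryPassword) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(primaryPassword.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(hash, "AES");
    }

    public static byte[] encrypt(String primaryPassword, String secret) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");   // simplified mode for the sketch
        cipher.init(Cipher.ENCRYPT_MODE, keyFrom(primaryPassword));
        return cipher.doFinal(secret.getBytes(StandardCharsets.UTF_8));
    }

    public static String decrypt(String primaryPassword, byte[] encrypted) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, keyFrom(primaryPassword));
        return new String(cipher.doFinal(encrypted), StandardCharsets.UTF_8);
    }

    // Detect a changed primary password by decrypting the stored known string.
    public static boolean primaryPasswordUnchanged(String primaryPassword, byte[] storedKnownString) {
        try {
            return KNOWN_STRING.equals(decrypt(primaryPassword, storedKnownString));
        } catch (Exception decryptionFailure) {
            return false;   // a decryption failure also indicates the password has changed
        }
    }
}

When the check fails, the re-encryption step would decrypt each vault entry with the old primary password and call encrypt again with the new one.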
This system avoids the problems of an admin user easily decrypting the credentials of a user. Anything that the system knows, the admin user will generally be able to know as well. For example, if the security system is configured to mutate all stored passwords with the same key, an admin user, who has access to the box running the security system, can decompile the program and figure out the encryption method. The admin user can then apply that decryption method to every password in the credential vault.
Since the credential vault password is encrypted with a user's password or a hash of the password, an admin won't have access to a way to decrypt the secondary password. If a user's primary password changes, all credential vault passwords are no longer retrievable.
In one embodiment, whenever a user enters a password into an Authenticator, the Authenticator can pass the password to Security Service for validation. The Security Service can use this password (or a one way hash of this password) to encrypt any passwords that get stored in the credential vault. The user's password can be a symmetric key which need not be persisted anywhere.
The user's password can also be used to encrypt a static string, such as “BOJAMBLES”, which is known to Security Service. This encrypted String can be persisted to the database.
The next time the user logs in to the reverse proxy server, this password can again be sent to Security Service. If Security Service can log in to the back-end repository with this password, it can do another check. Security Service can use the latest sent password to decrypt the BOJAMBLES string. If the decrypted value is indeed BOJAMBLES, Security Service knows that the user's password has not changed. The security service can now use this password to decrypt every password in User1's credential vault using the last sent password. User1 now has access to all his credential vault passwords for auto-login with backend apps.
Assume User1 now changes his password to POKEMONRULEZ.
User1 now tries to access a resource and the reverse proxy server can ask for a login (assuming session expired after password change). User1 now logs on with the new password. This password gets sent to Security Service. It can validate the password with a back-end repository, then it can attempt to decrypt BOJAMBLES with POKEMONRULEZ. The security service can then realize that the user's password has changed. The security service can then let the reverse proxy system know that the user's password has changed.
The reverse proxy system can then display a page to the user. This page can say something like: “the security service has detected that your password has changed recently. In order to re-encrypt your credentials, please enter both your new and old password. Otherwise, the security service will not be able to re-encrypt your credentials and you may be forced to relogin to all your applications”.
If User1 is able to recall his previous password and enters it in to the form, the reverse proxy system can send the two passwords back to the security service. The security service can now be able to decrypt BOJAMBLES with the old password. Once that is validated, the security service can decrypt all of the user's credentials in the vault. The security service can then re-encrypt those passwords with the new password, and also re-encrypt BOJAMBLES with the new password.
Credential acquisition can also be an important part of the credential vault. If a user logs in to the remote web application, we can acquire their password and store it in the credential vault.
Roles and policies can allow the display and access to data in a flexible manner. Users can be mapped to roles and policies to indicate whether users (as a result of one of these roles) have access to a display or other application resource. A description of one system using roles and policies is given in the U.S. patent “System and Method for Maintaining Security in a Distributed Computer Environment,” U.S. Pat. No. 6,158,010, which is hereby incorporated by reference.
Requiring each web application to implement roles and policies can complicate the development of these web applications. One embodiment of the present invention is a way for web applications to use roles without doing the mapping of users to those roles.
As shown in the figures, one embodiment of the present invention is a method comprising maintaining a central store 304 of application role mappings at a reverse proxy server 302; receiving a request for a web application at the reverse proxy server (such as from browser 310); determining the proper user role for the web application at the reverse proxy server 302; and sending the proper user role as part of an HTTP header 312 to the web application 308.
The web application 308 can use the user role without doing an independent mapping of the user to a role. The reverse proxy server can interrogate the web application to determine the set of roles used by the web application. The reverse proxy server can implement policies to determine a user's access to the web application. As described in more detail below, code for the web application can include a tag to cause the reverse proxy system to insert a second web application into the displayed page. This second web application can use independent user roles. The reverse proxy server 302 can look up roles for multiple web applications that are combined into a single presentation to the user.
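Purely to illustrate this flow, a reverse proxy might look up the mapping and forward it in headers along the following lines; the header names (X-Proxy-User, X-Proxy-Role), the store shape, and the class name are assumptions for the example.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Collections;
import java.util.Map;

public class RoleMappingProxy {
    // Central store of application role mappings: application id -> (user -> role).
    private final Map<String, Map<String, String>> roleStore;

    public RoleMappingProxy(Map<String, Map<String, String>> roleStore) {
        this.roleStore = roleStore;
    }

    public HttpURLConnection forward(String user, String applicationId, URL applicationUrl) throws Exception {
        String role = roleStore.getOrDefault(applicationId, Collections.emptyMap())
                               .getOrDefault(user, "guest");
        HttpURLConnection connection = (HttpURLConnection) applicationUrl.openConnection();
        // The web application reads the role from the header instead of mapping the user itself.
        connection.setRequestProperty("X-Proxy-User", user);
        connection.setRequestProperty("X-Proxy-Role", role);
        return connection;
    }
}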
Administrators can specify which roles the web application support in the administration UI.
In one case, the web application 320 can include a pagelet tag (described below) that causes the proxy server 324 to insert a display from web application 322 into the display for web application 320.
In one embodiment if a user, such as “FrankF”, is unable to access the web application 322 independent of web application 320, the pagelet will not be displayed to the user.
Pagelets can be comparable to portlets in that they both contain user interfaces (UIs) that can be displayed in more than one consuming application. But, in one embodiment, there are significant differences.
Portlets can only be displayed in one type of consuming application—a portal. In fact, most portlets are written using proprietary code which requires that they be consumed in a particular vendor's portal. Pagelets, on the other hand, are a much more general technology. Any application when viewed using the runner reverse-proxy can consume any pagelet.
Portlets often require that the consuming application be written in the same language or on the same platform as the portlet. Pagelets can be invoked using XML, a language and platform independent standard, so pagelets and their consumers can be written in different languages and on different platforms.
Portlets, because of their link to portals, assume a specific environment for resolving security and role questions. Pagelets can generalize the functionality, so that security and roles can be customized for any environment.
Portlets, again due to their link to portals, assume a specific UI paradigm (e.g. view modes). Pagelets require no such constraints. Any standard web technology (i.e. any technology supported by web browsers) can be used to generate any type of UI and that UI will be rendered in the consuming application.
The user roles can be sent to the web applications in an HTTP header so that the web application need not do an independent mapping of the user to a role. The tags can include a namespace and ID.
One embodiment of the present invention comprises determining a pagelet web application from a tag in a first web application and inserting the pagelet web application into the display of a page of the first web application.
A reverse proxy server adapted to receive user requests for web applications can obtain web application code for the first web application and interpret a tag in the first web application to indicate a pagelet web application. Pagelet web application code from the pagelet web application and web application code from the first web application can be merged, and a combined presentation can be provided.
One embodiment of the present invention is a non-invasive way to insert pagelets into a web page. In some cases, it is desired to not modify the source code of a web page using pagelet tags. For example, the web application may be obtained from a third party or it would be otherwise undesirable or difficult to modify the web application code. In this embodiment, a proxy, such as a reverse proxy, can search for an identifier, such as a name or a code pattern, in the web page and then use this identifier to determine whether to insert the pagelet.
In the example of the figures, the table 422 can also include a location indicator that can indicate to proxy 420 where in the web page to insert the pagelet.
The indication of page and location can be by pattern matching. For example, a search string can be indicated and each page with the string can have the pagelet inserted. In one embodiment, the web page indication can use Boolean operators such as “AND” and “OR”.
Optionally, the table 422 can indicate wrapper code for the pagelet.
The table 422 can also optionally include attributes that are to be obtained from the page and provided to the pagelet.
One embodiment of the present invention comprises determining a pagelet web application by recognizing a particular page in a first web application to indicate a pagelet web application and inserting the pagelet web application into a pre-configured section of a page of the first web application. The first web application page and the location to insert the pagelet web application can be determined either programmatically or by specifying a specific page or location directly. This embodiment allows pagelet web application code to be inserted into a first web application, where the first web application code has not been modified prior to the first web application being proxied.
A reverse proxy server can produce interstitial pages to the user. The interstitial pages need not be generated with the web application code.
The reverse proxy can block access to the web application until the specified interstitial pages have been processed.
In one embodiment, the interstitial page HTML can come from a separate web application. This allows you to write one web application that can provide interstitial pages to many web applications. Alternately, there can be different interstitial pages for different web applications.
In addition, while interstitial pages usually come before you can access the web application, they can come at different points in the login cycle. Interstitial pages can be provided before logging in, and after logging in, in any combination.
Different interstitial pages can be used for different resources, users, or other conditions.
In one embodiment, at least one interstitial page is used to receive a user password, as a warning page, and/or as a license agreement page. The interstitial page can include user info. In one embodiment, the user need not be signed in to receive the interstitial pages. The following describes how the components work in one particular exemplary embodiment. This section merely describes one exemplary way to implement the present invention; other architectures implementing the methods and systems of the present invention can be used.
In one embodiment, Pagelets can be composed of two basic parts: a front-end pagelet tag and a Pagelet Engine. The pagelet tag can be used to insert a pagelet into HTML. The pagelet tag can then be replaced by a reverse proxy server with the content of the pagelet. The Pagelet Engine can be a massively parallel pagelet processing engine that is used to retrieve multiple pagelets from back end resources simultaneously.
Pagelet configuration information can be persisted in an IPersistentPagelet interface.
The Pagelet Tag can be a simple tag that can allow one to specify the pagelet ID and include data to be passed to the pagelet. In one embodiment, there can be two kinds of data passed to the pagelet. The first is tag attributes, and the second is an XML payload. Standard xhtml tag attributes in the pagelet tag (i.e. not pt: attributes) can be passed to the pagelet directly. The XML payload can allow the HTML author of the consuming page to include more complicated data to be passed to the pagelet.
An exemplary pagelet tag can look like this:
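The tag markup itself does not survive in this text, so the following is a hypothetical illustration only, assembled from the conventions described here (a pt: prefix, a namespace and pagelet name, ordinary xhtml attributes that are passed to the pagelet, and an XML payload child); the namespace URL, separator, and attribute names are invented for the example.

<pt:financeapp.myvacationhours xmlns:pt="http://example.com/xmlns/pt"
    title="My Vacation Hours" pageletwidth="300">
  <payload>
    <employee id="12345">
      <region>EMEA</region>
    </employee>
  </payload>
</pt:financeapp.myvacationhours>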
Pagelets can be identified by a unique combination of namespace and pagelet name. These values can be constant when pagelets are migrated or moved to different servers. Pagelet names can be unique within a namespace. Namespaces are also known as Pagelet Libraries. Namespaces and pagelets can be concatenated together separated by a colon to provide a unique reference to a pagelet (i.e. financeapp:myvacationhours).
The namespace can be any text string, such as “financeapp” or “Urn://”. The namespace can be a part of the pagelet. Namespaces need not be required to be unique between parent resources, as customers may have several near-identical resources that differ only in security, each containing different but related pagelets from the same back-end resource or multiple back-end resources. This can require a check for pagelet uniqueness within a namespace whenever new pagelets are created or pagelet namespaces are edited.
A Pagelet Engine can allow the pagelet system to retrieve multiple pagelets simultaneously. In order to retrieve content, a client component can create a Request Chain composed of individual Requests. Each request can contain all of the information necessary to retrieve a single pagelet. This information can include the ID of the pagelet, the back-end resource URL, the pagelet attributes, and the XML payload.
The Request Chain can then send off multiple HTTP requests to retrieve the data, and the data can be available to the client component after all requests have returned or timed out. The data can then be transformed and converted into a Markup Array.
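As a rough illustration of that parallel retrieval (the class names, fetch mechanics, and timeout handling are assumptions, not the actual engine), a Request Chain might be sketched as:

import java.util.*;
import java.util.concurrent.*;

public class RequestChain {
    private final List<Callable<String>> requests = new ArrayList<>();

    // Each request carries everything needed to retrieve a single pagelet.
    public void add(String pageletId, String resourceUrl, Map<String, String> attributes, String xmlPayload) {
        requests.add(() -> fetch(pageletId, resourceUrl, attributes, xmlPayload));
    }

    // Send off all HTTP requests and wait until each has returned or timed out.
    public List<String> retrieveAll(long timeoutSeconds) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, requests.size()));
        try {
            List<Future<String>> futures = pool.invokeAll(requests, timeoutSeconds, TimeUnit.SECONDS);
            List<String> markup = new ArrayList<>();
            for (Future<String> future : futures) {
                try {
                    markup.add(future.get());                    // pagelet content, later transformed
                } catch (CancellationException | ExecutionException e) {
                    markup.add("<!-- pagelet unavailable -->");  // timed out or failed
                }
            }
            return markup;
        } finally {
            pool.shutdownNow();
        }
    }

    private String fetch(String id, String url, Map<String, String> attributes, String payload) {
        // Issue the HTTP GET/POST for this pagelet (not shown in the sketch).
        return "";
    }
}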
The Pagelet Request Flow can involve two passes through the HTML data. The first pass can collect the pagelet data and initiate content retrieval, while the second pass can convert the standard Adaptive Tags and replace the pagelet tags with the pagelet data.
- 1. User requests page from the Reverse Proxy Server.
- 2. The Reverse Proxy Server can retrieve main page content from remote server.
- 3. Transformer can convert HTML into markup fragments.
- 4. 1st pass through markup fragments can be done to convert URLs and retrieve pagelets and pagelet data.
- 5. Pagelet security can be checked
- 6. A Request Chain can be initiated for all the pagelets simultaneously and wait until all pagelet content is retrieved.
- 7. Returned content can be processed for login pages.
- 8. Returned content can be processed for Adaptive Tags in individual tag engines.
- 9. Adaptive Tags in the main page can be processed. Individual pagelet tags can insert the processed pagelet content into the HTML output.
- 10. Transformed and processed HTML content can be returned to the end user.
In one embodiment, pagelet tags work in place. The pagelet tags can be placed in a DHTML area so they can be refreshed individually. In one embodiment, an HTML page with placeholder data for slow pagelets can be returned. The Request Chain can be stored on the session and continue to retrieve the slow data in the background. Then, the end user's browser can initiate an AJAX request to the reverse proxy server for the additional pagelet content, which could then be filled in dynamically. This can mitigate the problem of slow pagelets.
The Reverse Proxy System can manage Primary and Resource Authentication, as well as Primary Authorization. The Reverse Proxy System can determine if this user is logged in to the reverse proxy system using the correct authentication method to see this pagelet, whether this user has permission to use this pagelet, and perform Auto-login to remote pagelet servers.
A login sequence for primary pagelet Authentication/Authorization can look something like this:
- 1. User requests page containing pagelets.
- 2. The reverse proxy server checks access on each requested pagelet.
- a. If user is logged in to the resource at the correct authentication level, but still does not have access to a pagelet, that pagelet can be replaced with an “Access Denied” HTML comment.
- b. If user is not logged in to the reverse proxy server, or is logged in at too low an authentication level, add to unauthorized pagelet list.
- 3. Alternately, the unauthorized pagelets can be replaced with login links to upgrade the user's session to the appropriate level.
- a. Authentication module can determine the highest auth level required by an unauthorized pagelet, and attempt to log the user in using that auth method.
- b. If login is successful, the Authentication Module can redirect the user to the original page.
- 4. If there are no unauthorized pagelets, retrieve and display the pagelet content.
An auto-login sequence for pagelet remote servers can look something like this:
- 1. User requests page containing pagelets.
- 2. The reverse proxy server retrieves content for each requested pagelet.
- 3. LoginPageDetector is run on each pagelet response.
- 4. For each pagelet response that contained a login page:
- a. The Auto-login component is used to generate a new pagelet request to handle the login page.
- b. All of the new pagelet requests are sent off in parallel.
- c. Return to step 3 to process responses.
- 5. Pagelet responses (all non-login pages) are transformed and inserted into the output page.
An auto-login sequence for pagelet remote servers that requires the user to fill out a login form can look something like this:
- 1. User requests page containing pagelets.
- 2. The reverse proxy server retrieves content for each requested pagelet.
- 3. LoginPageDetector is run on each pagelet response.
- 4. If a previously processed login page is returned (implying that login failed), the current request is stored on the session and the pagelet login page is displayed to the user.
- 5. The user fills out and posts the form.
- 6. The credential information is retrieved and stored in the credential vault (if enabled).
- 7. The user is redirected to the original page.
Remote pagelet data can be checked for login requests by all Auto-login components (i.e. Form, Basic Auth, etc. . . . ).
In one embodiment, there are two main requirements for Pagelet JavaScript and Style Sheets. The first requirement is that there be a way to put JavaScript and Style Sheets into the Head element of the HTML page. The second requirement is that duplicate .js and .css files only be included once per HTML page.
Pagelet developers can mark their JavaScript & style sheet links to be included in the head using a special JavaScript tag (i.e. <pt:common.includeinhead>). Pagelet consumers can then need to include a special Include JavaScript Here tag (i.e. <pt:common.headincludes>) in the Head element of their HTML page. Pagelet JavaScript from the include-in-head tags can then be included in the output of the Head Includes tag. The Includes tag can also filter out duplicate .js and .css files.
The head includes tag can be optional. If the tag is not present, then JavaScript and cascading style sheets can be added at the end of the head tag in the main page. If the main page does not have a head tag, a head tag can be added at the beginning of the page. This can require the Transformer to identify the head element when parsing the page. The headincludes tag can be used when control over the insertion of the pagelet JavaScript and Style Sheets within the head element is required. There need be no way to specify the order of includes from various pagelets.
The includeinhead tag can work within a pagelet to add Style Sheets and JavaScript to the main page, but it can also be used directly in a main page. In one embodiment, since the includeinhead tag filters Style Sheets & JavaScript file includes to remove duplicates, a main page developer could include their Style Sheets & JavaScript includes in the includeinhead tag, which would ensure that there are no duplicate files included from the pagelet and main page.
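Assuming the tag names shown above (<pt:common.includeinhead> and <pt:common.headincludes>), a consuming page and a pagelet fragment might fit together roughly as follows; the file names, namespace URL, and surrounding markup are invented for the illustration.

<!-- Hypothetical consuming page -->
<html>
  <head>
    <title>Main page</title>
    <pt:common.headincludes xmlns:pt="http://example.com/xmlns/pt"/>
  </head>
  <body>
    <pt:financeapp.myvacationhours xmlns:pt="http://example.com/xmlns/pt"/>
  </body>
</html>

<!-- Hypothetical pagelet output; the script and style sheet are hoisted into the head,
     with duplicate .js and .css includes across pagelets filtered out -->
<pt:common.includeinhead xmlns:pt="http://example.com/xmlns/pt">
  <script type="text/javascript" src="/js/vacation.js"></script>
  <link rel="stylesheet" type="text/css" href="/css/vacation.css"/>
</pt:common.includeinhead>
<div class="vacation-summary">You have 12 days remaining.</div>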
In one embodiment, during In-place Refresh, JavaScript and Style Sheets cannot be inserted into the head element, as it has already been displayed. Therefore, the includeinhead tag does not function in in-place refresh pagelets. In order to facilitate building simple in-place refresh pagelets, the includeinhead tag can be ignored during in-place refresh of a pagelet. This means that a pagelet author can use the includeinhead tag to insert JavaScript libraries into the main page the first time the pagelet is displayed on a page, but that the JavaScript libraries can not be re-included during in-place refresh of the pagelet.
In transparent proxy mode, the transformURL function is only used on pagelet content; general resource content doesn't need it. In non-transparent proxy mode, the transformURL function is needed for all Javascript URLs.
If a page requires the transformURL function, then the reverse proxy server code can automatically insert an include for the transformation Javascript library, as well as an initialization Javascript statement similar to the PTPortlet constructor in portal.
In-place-refresh libraries: by default, pagelet content need not be set to in-place-refresh; by using an inplacerefresh tag, pagelet developers can turn on in-place refresh (IPR). With IPR turned on, all URLs within pagelet content are prefixed with JavaScript which does in-place refresh.
If IPR is not turned on, they can manually insert the inplacerefresh tag around a URL to transform that URL into in-place-refresh link. (This tag can be applied to URLs in resource content as well).
If a resource content page uses in-place refresh (through tags), or has a pagelet inside it which uses in-place refresh, the reverse proxy server can automatically insert the Javascript includes for in-place refresh on the page. Alternatively, the developer can add an inplacerefreshjslibrary tag on a resource or pagelet to make the reverse proxy server insert the in-place refresh libraries. Then the developer can be able to use the .refresh calls that Scripting Framework libraries provide.
In-place refresh Javascript libraries can also contain Javascript Session Prefs functions in them. So if a resource uses session prefs, it can include the inplacerefreshjslibrary tag to tell the reverse proxy server to include in-place refresh Javascript libraries.
Below is a list of exemplary public Javascript functions that Scripting Framework can support for developers:
- public static function getPageletByName (namespace, id)
- Returns a single portlet, given a namespace and portlet id (pageletname) NOTE: There is no restriction against having multiple pagelets on the same page which have the same namespace (since you can put the same pagelet on the page more than once), so portlet lookup by name in these cases is not guaranteed.
- public static function getPortletByUniqueID(id)
- Returns a single portlet, given a pagelet unique ID which is passed down via CSP.
- public static function getSessionPref(name)
- Get a single session pref
- public static function getSessionPrefs(names)
- Get multiple session prefs
- public static function setSessionPref(name,value)
- Set a single session pref
- public static function setSessionPrefs(hash)
- Set multiple session prefs
- public function clearEvent(eventName,eventNamespace)
- Clears the event listener for an event
- public function clearRefreshInterval( )
- Sets the refreshInterval of the portlet to 0 and clears any current refresh timers.
- public function deleteSessionPref(name)
- Deletes a single session pref
- public function deleteSessionPrefs(array)
- Deletes multiple session prefs
- public function formGetRefresh(form)
- Requests updated content from the server by submitting a form GET request
- public function formPostRefresh(form)
- Requests updated content from the server by submitting a form POST request
- public function formRefresh(form)
- Requests updated content from the server by submitting a form
- public function getRefreshInterval( )
- Returns the refreshInterval of the pagelet
- public function getRefreshURL( )
- Returns the refresh URL of the pagelet
- public function raiseEvent(eventName,eventArgs,eventNamespace)
- Raise a new event
- public function refresh(url)
- Refresh the portlet content from the server, using URL if provided
- public function refreshOnEvent(eventName,eventNamespace)
- Associate portlet refresh action with a specific event
- public function registerForEvent(eventName,eventCallback,eventNamespace)
- Register to be notified of an event
- public function setInnerHTML(html)
- Sets the innerHTML of the pagelet from a string
- public function setRefreshInterval(refreshInterval,startNewRefreshTimer)
- Sets the refreshInterval of the pagelet
- public function setRefreshURL(url)
- Sets the refresh URL of the pagelet
- private function PTPagelet(namespace, uniqueid, containerID, refreshURL, refreshInterval)
- Constructor that the reverse proxy server can put in for every pagelet
The Javascript library which does only transformation can have the following functions:
- private function transformURL(url)
- Transform a URL to be gatewayed
- private function PTPagelet(pageletid, containerID, remoteRequestURL, remoteBaseURL, resourcePrefixURL, resourceGatewayPrefixURL)
Pagelet discovery can include retrieving a list of available pagelets and the following data about each pagelet:
- Pagelet name, description, and HTML sample code
- Pagelet parameters (name, description, type, and mandatory)
- Pagelet payload XML Schema URL
- pagelet meta-data (name value pairs)
Pagelet consumer developers can go to an HTML page to find pagelet information. Pagelet developers/administrators can have the option of removing pagelets from this list. In one embodiment, since consuming resource security checks cannot be handled automatically, the list of allowable consumer resources should be included on the page.
In order to simplify the reverse proxy server, this UI need not be hosted as part of the reverse proxy server, but should rather be combined with the login server. This can be available through a known URL, which can then be exposed through the reverse proxy server if desired. The reverse proxy server should come with a pre-configured resource that points to this known URL, although by default no one should be able to access the resource.
Accessing the Pagelet Catalog UI need not require the reverse proxy server admin user login, although it can be secured via basic auth with a static user name. This security should be easy to disable. The pre-existing resource can be setup to auto-login using the static user name.
This UI can be a simple set of jsp or jsf pages that access the reverse proxy server database and display the pagelet information. Ideally, this UI can also be available to be copied and hosted on a developer website or other location. However, since the pages can be dynamic, that can require the pages to be crawled or manually saved from a browser. The index links can also need to be rewritten to point to static pages.
The UI can include both dynamic pagelets, and the tag libraries included with the reverse proxy server.
In one embodiment, since performance is not crucial on the admin server, the pages need not be cached and can access the database for every page view.
Pagelets can be discoverable programmatically. This can allow an application (such as Holland) to discover which pagelets are available to the current user. Since pagelet security is enforced programmatically, pagelet developers/administrators need not be able to remove their pagelet from this list (via the Publish checkbox). Since the Pagelet Discovery Client can always be running inside the context of a reverse proxy server resource, the results can only contain pagelets that can be consumed by the current resource.
A simple Java only API in the client can be provided that can allow an application to query for the available pagelets for the current user in the current request. The API can do automatic filtering of the results based on user access and consuming resource restrictions. The API can also allow filtering of the pagelet results based on other criteria, such as last modified. The queries can return either basic information about the pagelet, such as its name and description, or full information for the pagelets.
The queries can be processed by a simple reverse proxy server based web service on the Admin UI box. The web service can require a valid reverse proxy server security token from the client. The web service can query for the pagelets available to the consuming resource, and then evaluate the Security Service security policies for the pagelet parent resources to see which ones are visible to the user in the current request.
In order to provide the correct security filtering, the current request information (i.e. user, IP, time, etc. . . . ) can be passed by the client to the web service. The EDK on the remote resource should already have all of the request headers and information necessary.
The pagelet information can be published to a secured registry for programmatic discovery and IDE integration.
Pagelet Discovery Web Service APIs can be exposed as a client-side API similar to the Embedded Development Kit (EDK). The client can be Java only and a Java Archive (JAR) can be provided as one of the reverse proxy server build artifacts.
The underlying transport can be done by turning the PageletManager query class into a web service using RAT. The web service can return all query information in a single response, rather than sending individual requests for pieces of data. This means that a pagelet query could theoretically be quite expensive. The best practice for querying can be to query for pagelet info for large numbers of pagelets, and then query for the entire pagelet information for an individual pagelet, or a small number of pagelets.
When inserting a pagelet tag into a resource, developers can be able to specify an XML payload within a pagelet tag. That payload can be transferred to the remote tier over HTTP, within the HTTP request for the pagelet content.
The payload can be sent in the body of a POST request—in our opinion, as discussed above, the pros outweigh the cons. Other CSP data can be sent in HTTP headers, as in previous protocols. Non-payload pagelet requests can be sent as GETs; requests with payload can be sent as POSTs.
In order to encourage best practices around pagelets, sample pagelets can demonstrate the following principles/practices:
- How to describe how much screen real-estate a pagelet needs, as well as how the pagelet consumer can request the pagelet to display in a certain size.
- Skinnable using style sheets and can consume a common set of style classes (possibly based on WSRP/JSR168).
- How to include meta-data specifying the URL to an HTML or image prototype to show composite app developers what the pagelets can look like. This prototype can be hosted on the pagelet server.
- How to convert simple portlets to pagelets. This sample code can show, among other things, how to deal with portlet/user preferences that are available in portlets but not in pagelets.
The sample pagelet code can look something like the following, although this is not a definition of the IDK API.
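Since the sample itself is not reproduced here, the following is a hypothetical servlet-style pagelet sketch only, and explicitly not the IDK API; the header names, parameter handling, and markup are assumptions carried over from the earlier illustrative examples.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class VacationHoursPagelet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // User and role information supplied by the reverse proxy (assumed header names).
        String user = request.getHeader("X-Proxy-User");
        String role = request.getHeader("X-Proxy-Role");

        // A tag attribute from the pagelet tag, assumed to arrive as a request parameter.
        String width = request.getParameter("pageletwidth");

        // The XML payload, if any, assumed to arrive in the POST body.
        StringBuilder payload = new StringBuilder();
        try (BufferedReader reader = request.getReader()) {
            String line;
            while ((line = reader.readLine()) != null) {
                payload.append(line);
            }
        }

        // Render a small HTML fragment (output escaping omitted in this sketch).
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<div style=\"width:" + (width != null ? width : "300") + "px\">");
        out.println("  Hello " + user + " (" + role + "), you have 12 vacation days left.");
        out.println("</div>");
    }
}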
The pagelet components list can include components that are used exclusively during pagelet processing, as well as other modules that are involved in the pagelet processing lifecycle.
The Tag Engine can process both tags and pagelets. The Tag Engine Module can use the Tag Engine shared component to process the tags, and then processes the pagelets. This module can control the overall pagelet flow, utilize other components, and delegate processing to other modules.
The same retriever can be used both for pagelets and regular resource requests. Retriever can know what type of request it is (through an argument passed to it) and may do some things differently (create multiple parallel requests for pagelets vs. only one for resources; different timeout; pass through different parameters to CSP library, etc).
The reverse proxy server can support CSP features both on resources, and on pagelets. The reverse proxy server can support a subset of the portal's CSP features.
There can be a CSP library (not a module) created which can handle adding/processing of CSP headers. This library can be called by the retriever with appropriate arguments.
- public void ProcessCSPRequest(IProxyUserSession session, IPTWrappedRequest browserRequest, IOKHttpRequest remoterequest, IRunnerResource resource, ISessionPrefs sessionPrefs, int requestMode, String strConsumingURI, IPagelet pagelet, String strImageserverURI);
- public void ProcessCSPResponse(IProxyUserSession session, IOKHttpRequest remoterequest, IRunnerResource resource, ISessionPrefs sessionPrefs, int requestType);
User Info, Roles, user id, user name, login token can be available off IProxyUserSession interface.
Session Prefs can be available from ISessionPrefs interface. IRunnerResource object can contain a list of user info prefs, session prefs, login token sending info to send to remote tier.
ImageserverURI and Consuming Resource URI (for pagelets only) can be passed into the CSP library by the retriever. CSP libraries can have different modes: regular (resource), pagelet, and in-place-refresh. The mode can be passed in by the retriever.
When pagelet data is retrieved from the remote server, it can be checked for login pages. This could happen the first time a remote server is accessed by a particular user, or if the session times out. The Auto-login Module can identify and process remote login pages for a resource, including pagelets.
The Auto-login Module can run during the onTransformContent phase and process all remote responses for login pages. Since pagelet processing essentially duplicates normal page processing many times on a single page, the Auto-login component can be broken out into separate classes so that it can be used by both the Auto-login Module and the Pagelet processor (TagEngine Module). This can include both form based login and basic auth login.
The Auto-login component can be used to check all pagelet responses for login pages. If login pages are detected, the module can be used to generate a login request for the remote resource. This can then be processed using the Retriever. The response can be checked again for login pages and used in place of the original response once the login cycle is finished.
In order to maintain performance, pagelet responses can be checked for login pages, and then all login pages can be processed in parallel. Even though this is done in parallel, the login cycle for each pagelet could comprise multiple pages, which may mean all pagelets would have to wait until the pagelet with the longest login cycle had finished. A pagelet login is expected to be infrequent so this should not affect normal pagelet usage very much.
In addition, if the credential vault is being used and login fails for a pagelet, the Auto-login Component (& Tag Engine) can display the login page to the user so they can login successfully. The Auto-login Component then needs to capture the login information from the user's request, store it in the credential vault, and then process the originally requested page.
The Pagelet Discovery API can be a standalone web service. It can be deployed to the same embedded Tomcat server as the Admin UI and Login Server. There can be a Java-only client similar to the EDK.
The Pagelet Catalog UI can be a simple set of Java Server Pages (JSP)/Java Server Faces (JSF) pages that display a list of available pagelets and a detail view for pagelet information. This can be set up so it can easily be configured to be either protected by basic auth with a static username/password or unprotected.
The UI can be a combination of static HTML pages generated by our tagdocs utility, and jsp/jsf pages to view the pagelets. The static tagdocs index can be included as a jsp include into the complete index. The PageletListBean and the PageletViewBean can handle data access using RMS.
The reverse proxy server can provide both its own unique tags, as well as extensions to the common tags provided by the Portal. It can also provide the set of portal tags specified in the PRD. In order to maximize interoperability between pagelets and portlets, the portal can be upgraded to include the new common tags. The portal can also know to ignore the reverse proxy server tags, rather than treating them as an error.
Exemplary Reverse Proxy Server tags can be included in the tag library:
These tags behave similarly to the ptdata tags in the Portal.
The pagelet tag is simply the way to add a pagelet to an HTML page.
This tag can output an HTML element that can display a pop-up page containing the picker. The picker can be a simple list with filter box, similar to the ones used in the reverse proxy server admin UI.
Analytics Tag
This tag may or may not be included in the reverse proxy server, depending on details of the Analytics integration.
Common tags can be shared between the reverse proxy server and portal. They can include new tags (with the reverse proxy server specific implementations) and enhancements to the pre-existing general tag libraries for specific reverse proxy server needs.
Tags can be used for doing conditionals. A variety of expr tags (such as the roleexpr tag) can store Boolean results for use by the if tag. This can simplify the if tag, and avoid the problem of having to figure out if the expression is an integer expression, a role expression, or something else.
The corresponding if-true and if-false tags do not have any attributes. They are simply displayed or not displayed depending on the results of the if tag. It can be necessary to enable conditional logic based on how many auth sources or resources are returned (i.e. empty, a single item, or a list).
Variable Tag Enhancement
This tag can be converted for use with the collection tag so that you can create collections of variables, in addition to collections of data objects. This can be useful for the rolesdata tag. The collection tag can also need corresponding enhancements.
In one embodiment, in the reverse proxy server proxy, URLs for various links in the HTML returned by resources are transformed in the reverse proxy mode, but not for transparent proxy mode. For pagelets, URLs can be rewritten even in the transparent proxy mode: the Javascript refresh can have the same domain, machine name, and port as the main page, otherwise ugly browser warnings appear. Another reason for rewriting URLs is branding: customers want a consistent URL experience for their users (i.e. the machine name/host doesn't change).
A “mini-gateway” for URLs inside pagelets can be used, both in transparent and reverse proxy mode. All URLs within a pagelet can be rewritten with a scheme like the following:
- http[s]://includingresourcehost:port/includingresourcepath/PTARGS_pageletresourceid/pageletrelativeurl
For example:
Consuming resource with an external URL: can include a pagelet. Pagelet's parent resource can have internal URL: and its ID in the reverse proxy server DB is 15. Pagelet has internal URL. Within the pagelet, there can be a link to.
User can hit, which includes the discussion pagelet. The link within a pagelet can get rewritten to—15/discussion/viewthread.jsp. This link can be composed of the consuming resource URL followed by the PTARGS, pagelet ID, and the remainder of the link in the pagelet after the pagelet parent internal URL.
Internally, the mini-gateway can be a resource mapper which extracts the resource ID from the URL and sets the appropriate resource on IntermoduleData. On pagelet URLs, it gets called instead of the regular resource mapper.
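As a rough illustration of this mapping step, the following sketch shows how a PTARGS-style pagelet URL could be parsed to recover the pagelet resource ID and the pagelet-relative path; the class and method names are hypothetical and the IntermoduleData wiring is omitted.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper illustrating only the mini-gateway parsing step.
public final class PageletUrlParserSketch {
    // Matches ".../PTARGS_<resourceid>/<pagelet-relative-url>"
    private static final Pattern PTARGS = Pattern.compile("/PTARGS_(\\d+)/(.*)");

    public static String[] parse(String requestPath) {
        Matcher m = PTARGS.matcher(requestPath);
        if (!m.find()) {
            return null; // not a pagelet URL; the regular resource mapper handles it
        }
        String resourceId = m.group(1);   // e.g. "15"
        String relativeUrl = m.group(2);  // e.g. "discussion/viewthread.jsp"
        return new String[] { resourceId, relativeUrl };
    }
}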
Embodiments of the present invention can provide a proxy and SSO solution that interoperates with a portal product. In one embodiment, there can be three major components: proxy, administration console, and core resource API. These can be called RProxy, Admin Console, and Core API respectively.
Supporting the primary components can be the additional top-level features of Auditing, Analytics, and Development Tools.
RProxy can support two modes of operation: reverse and transparent. These two modes can interoperate seamlessly. The end user only needs to be aware of when DNS changes are needed; they do not need to know about the distinction between transparent and reverse proxies.
In reverse proxy mode, RProxy can map resources based on the first part of the destination path in a request. For example, if a user requests proxy.external.com/resourceA/somepage.html then, according to user defined rules, the reverse proxy server can map that traffic to resourceA.internal.com/somepage.html. This mode of operation requires content rewriting so that URLs in the HTML returned from a resource point to the reverse proxy server and not to the resource itself. Transparent proxy mode can use DNS configuration such that IP addresses of the desired, external hostnames point to RProxy. In this configuration, RProxy can use the request hostname to determine where to redirect traffic. For example, can have an IP address identical to proxy.external.com, and RProxy can accept a request to and redirect it to the internal address resourceA.internal.com/somepage.html—HTML content rewriting is required in this case. Alternatively, RProxy may map the resource to the internal address 192.168.1.2 and this host is configured with the hostname—in this case HTML content rewriting is unnecessary because the host resourceA is configured using its externally facing DNS name, which has an external DNS address mapping to the same IP as proxy.external.com. That is, it returns URLs that point to itself at (instead of returning URLs that point to an internal hostname).
Finally, these two modes can be used in combination such that has an IP address identical to proxy.external.com and is mapped to resourceA.internal.com/appB. The users should be aware from the UI and from logs that a true transparent proxy case requires adding a new, internal-only DNS entry that maps to the internal IP address. This applies when the protected resource has an identical machine name to the external DNS name. For example, mail.ford.com can have an external IP address identical to RProxy. The protected resource can have a hostname of mail.ford.com as well. The user can manually add a DNS entry for an arbitrary internal hostname, say mail.internal.ford.com, and map it to the internal address, say 192.168.1.2. When they configure an RProxy Resource, they can specify that mail.ford.com maps to mail.internal.ford.com. The machine mail.ford.com can retain its hostname internally and externally (so no URL rewriting is needed), and RProxy can know how to properly redirect the traffic.
The following are general steps for processing a request. When the reverse proxy server proxy receives a request from the browser there can be several things that need to happen:
- 1) the reverse proxy server can determine which remote server this request refers to. For example: request for should be forwarded to the remote server named. This stage is called Resource Mapping.
- 2) Also, the reverse proxy server can determine who the user is and potentially provide a way for the user to identify herself. This stage is called Authentication.
- 3) Having mapped the resource and identified the user, the reverse proxy server can now determine if this user has access to the requested resource. This is the 3rd stage we call Authorization.
- 4) Assuming that Authorization stage succeeded, the next thing the reverse proxy server can do is to forward the request to the remote server and get the response. This stage can be called Retrieval.
- 5) Since the reverse proxy server provides application assembly functionality, the response from the remote server can contain some instructions for the reverse proxy server to do something. These are called tags. The reverse proxy server also needs to re-write URLs within the response to make sure they point back to the reverse proxy server and not to the original server. This stage is called Transformation.
- 6) And finally, the reverse proxy server can send the response back to the browser. This stage is called PostProcessing.
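Expressed as code, the six stages amount to a simple pipeline. The following is a minimal sketch under assumed names; none of the class or method names correspond to actual reverse proxy server classes, and the placeholder bodies only stand in for the real modules.

// Illustrative-only sketch of the six request-processing stages.
public final class ProxyRequestFlowSketch {

    public String handle(String requestUrl, String userToken) {
        String resource = mapResource(requestUrl);        // 1) Resource Mapping
        String user = authenticate(userToken);            // 2) Authentication
        if (!isAuthorized(user, resource)) {               // 3) Authorization
            return "403 Forbidden";
        }
        String remoteResponse = retrieve(resource);        // 4) Retrieval
        String transformed = transform(remoteResponse);    // 5) Transformation (tags, URL rewriting)
        return postProcess(transformed);                   // 6) PostProcessing
    }

    // Placeholder implementations standing in for the real modules.
    private String mapResource(String url) { return "resourceA.internal.com"; }
    private String authenticate(String token) { return token == null ? "anonymous" : "user"; }
    private boolean isAuthorized(String user, String resource) { return !"anonymous".equals(user); }
    private String retrieve(String resource) { return "<html>...</html>"; }
    private String transform(String html) { return html; }
    private String postProcess(String html) { return html; }
}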
These stages can be conceptually the same for any kind of request, in other words, no matter if the reverse proxy server received an upload request or download request, WebDAV request or Secure Sockets Layer (SSL) request—the logical stages that the request has to go through can be the same. On the other hand, the behavior of each of the stages could be different for different kind of requests. For example, transformation stage does not do anything for responses containing binary data (images/word documents/etc). Another example is that URL re-writing rules are different for normal pages and pagelet responses.
Another point to keep in mind is that logic to handle a particular type of request might span multiple stages. For example, a FormPickerModule can be responsible for detecting login forms served by remote applications and automatically logging the users into those apps. In order to do this, it can add data to the request (retrieval stage) and examine the data of the response (transformation stage).
RProxy can manage ACLs on Resource Objects to determine if a user (who has already authenticated with RProxy via SSO login) has access to a requested resource. The reverse proxy server uses Role Based Access Control (RBAC), based on users, groups, and roles as defined below.
Furthermore, user customizable rules can be mapped to resources. For example, a rule can be created that says, “If a user is from IP address range 192.168.0.1 to 192.168.0.254 then always allow access.” Or, “if a user is accessing this from 8 am to 5 pm, allow access, otherwise deny.” Custom rules are the most likely use of scripting.
RProxy can accept X.509 (PKI) certificates when establishing SSL connections with remote resources. There need be no certificate management for remote resources. Client certificates between RProxy and the browser user can be supported using SSL/TLS in the web application server itself. Per-resource certificate configuration need not be supported.
Single Sign On can enable a user to login to RProxy once and not login again until their SSO session times out or is explicitly logged out. SSO encompasses every aspect of login, authentication, interactive login pages, experiences, and rules. There are several different kinds of primary authentication (i.e. form, 3rd party SSO, basic auth, etc. . . . ), and different resources can require different kinds of authentication depending on their desired level of security.
In order to authenticate users to RProxy, a login page can be presented that allows the user to input their username, password, and specify an authentication source. Authentication may instead be automatic, as can be the case with Windows Integrated Auth. After authentication, an SSO cookie can be set that indicates the user has a valid login to the system, and the user should not be prompted for login again until this expires or the user can be logged out.
Authentication sources can be portal specific and can be supported via a tag that users can put into their login page.
RProxy supports WML login pages through complete replacement of the login form, as described below.
Anonymous access can use the same access and authentication mechanisms and need not be special cased within the code. Anonymous access does not require user authentication. For G5/G6, it may not be possible for the reverse proxy server to create new users because the security service can be implemented using the portal.
SSO Login supports kiosk mode: “is this a public computer?” When enabled, this causes cookie deletion under certain conditions (e.g. browser close) that might not otherwise occur. This can be similar to Outlook web access shared computer versus private computer.
SSO Login supports kiosk mode through ssoconfig.xml. Users can disable the usage of cookies by specifying negative timeout values. Specifying a negative timeout value can cause the cookie to expire when the browser closes.
In one embodiment, kiosk mode and cookie expiration may be moved from ssoconfig.xml to the experience definition.
RProxy can allow customers to completely replace the login page UI. This can be accomplished by configuring an experience rule through an administrative UI to use a specified remote login page instead of the default page. This remote page can then be free to provide different experiences for users based on their locale, browser-type (i.e. wml/BlackBerry), requested resource, etc. . . .
Login pages can be hosted on a remote server called the login server and the user can create new, custom login resources and experience definitions that refer to any remote host.
In order for remote login pages to properly display, RProxy can provide access to the original HTTP Request information. This information can be used to determine what kind of page to display.
- HTTP Request Headers—All headers from the original request can need to be forwarded to the remote resource.
- Cookies—All cookies can need to be forwarded to the remote resource
- HTTP Request Data—The remote login resource can need access to the following data from the original HTTP Request: Protocol, Remote Address (IP), Remote Host, Scheme, ServerName, ServerPort, isSecure
- Requested Resource—Which RProxy resource is being requested/accessed.
Sample Login Pages
The reverse proxy server can ship with sample experience definitions available in both JSP and ASP.NET. Those pages can include login, usage agreement and error pages. These pages can be internationalized.
Any complicated pages can have both a simple version (installed on the reverse proxy server) and a complicated version that can not be active when shipped.
The simple login page can prompt for the username and password. Depending on how many auth sources are available for the current resource, it can either store the auth source as a hidden input or display a drop down for the user to choose an auth source.
The complicated login page can choose one of several login pages based on the user's request (i.e. from what subnet, in what language (this can only be a comment), what type of browser—blackberry).
A guest user can have a distinct identity and may have user profile data. An anonymous user's identity can be unknown, though we may know key information such as their IP address. An anonymous user can not have user profile data or a credential vault. Both anonymous and guest users have session data (and thus, “session prefs”). Guest users need not exist in the reverse proxy server. There need be no way to create a new user in the security service.
When a resource supports anonymous access, no authentication need be necessary. Any user can see the resource using an anonymous proxy user session or as a specific user.
SSO logout occurs when a user clicks on a logout link that can be specified by an URL pattern for a resource. This URL pattern can be configurable through the admin UI. When this URL pattern is matched, the SSO logout procedure can fire. After that procedure is completed, the user can be redirected to the sso logout page URL specified on the experience definition.
There can be two cookies which manage/persist/recreate user sessions in the reverse proxy server: PTSSOCOOKIE and PERSISTEDPTSSOCOOKIE. Both cookies have timeout values that are configurable in ssoconfig.xml. The user can disable the use of both cookies by specifying a negative timeout value. Positive values for the timeout of these cookies can force them to be persisted to the user's harddisk.
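A minimal sketch of how those timeout values could be applied when the cookies are written is shown below, assuming standard servlet Cookie semantics; the class and method names are illustrative, not the reverse proxy server's actual cookie code.

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch: a negative configured timeout maps to Cookie.setMaxAge(-1),
// i.e. the cookie lives only for the browser session and disappears on browser
// close; a positive value makes the browser persist the cookie for that many seconds.
public final class SsoCookieWriterSketch {

    public static void addSsoCookie(HttpServletResponse response,
                                    String name,
                                    String value,
                                    int configuredTimeoutSeconds) {
        Cookie cookie = new Cookie(name, value);
        if (configuredTimeoutSeconds < 0) {
            cookie.setMaxAge(-1);                       // session cookie: gone when the browser closes
        } else {
            cookie.setMaxAge(configuredTimeoutSeconds); // persisted to the user's hard disk
        }
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}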
PTSSOCOOKIE—This cookie can be designed to be short-lived. The duration of this cookie can be enforced by linking HttpSessions to this object via a relationship managed by the LoginTrackingService. The relationship can be managed by the LoginTrackingService because this is the only object in the reverse proxy server that really manages application-scoped variables.
Because a user's session exists across domains, application-scoped variables can be the only way to track and persist data across domains.
When a user's HttpSession expires (this can occur when the user is not actively using the HttpSession), the death or invalidation of this HttpSession can be intercepted by a session listener called RunnerHttpSessionListener. This session listener can query the LoginTrackingService for all the active HttpSessions bound to this user. It can then remove the HttpSession which just expired. If after that, there are no active HttpSessions left for this user, the user can be removed entirely from the LoginTrackingService and the user's SSO session can have effectively expired. This can make it impossible for a user to reconstruct the reverse proxy server session by sending a PTSSOCOOKIE.
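A simplified sketch of such a listener is shown below; the LoginTrackingService is stood in for by a static map, and the session attribute name is an assumption.

import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class RunnerHttpSessionListenerSketch implements HttpSessionListener {

    // Stand-in for the LoginTrackingService: username -> that user's live HttpSessions.
    private static final Map<String, Set<HttpSession>> ACTIVE_SESSIONS = new ConcurrentHashMap<>();

    public void sessionCreated(HttpSessionEvent event) {
        // The login code (not shown) would register the new session under the user's name here.
    }

    public void sessionDestroyed(HttpSessionEvent event) {
        HttpSession expired = event.getSession();
        String username = (String) expired.getAttribute("runner.username"); // hypothetical attribute name
        if (username == null) {
            return;
        }
        Set<HttpSession> sessions = ACTIVE_SESSIONS.get(username);
        if (sessions == null) {
            return;
        }
        sessions.remove(expired);
        if (sessions.isEmpty()) {
            // No live HttpSessions remain: the user's SSO session has effectively
            // expired and PTSSOCOOKIE can no longer rebuild it.
            ACTIVE_SESSIONS.remove(username);
        }
    }
}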
PTSSOCOOKIE can be essentially linked to active HttpSessions. However, the removal of a user from the LoginTrackingService/Map triggered by the lack of active HttpSessions can be a weak form of logout. That means that if a PERSISTEDPTSSOCOOKIE is available (i.e., ssoconfig.xml has a positive value for this cookie's expiration and the user possesses one), the user's session can still be restored if the reverse proxy server receives a PERSISTEDPTSSOCOOKIE. However, if the timeout for PERSISTEDPTSSOCOOKIE is negative, then the removal of a user from the LoginTrackingMap can effectively force a user to log in again for his next reverse proxy server request.
PERSISTEDPTSSOCOOKIE—Cookie can be designed to be a long-term cookie (such as 1 week). The cookie's expiration can be managed in two places: 1) on the cookie itself through the Cookie class, and 2) via the LoginTrackingService. We have redundancy to remove or mitigate the risk of a user being able to hack his cookie and extend the lifetime of his cookie. We achieve this by simply tracking the username and session expiration date on the LoginTrackingService. When a user sends a PERSISTEDPTSSOCOOKIE, that cookie's expiration time must validate against the LoginTrackingService before the reverse proxy server can recreate the user's session. PERSISTEDPTSSOCOOKIE need not be linked to a user's active HttpSession(s). Therefore, a user may not log into the reverse proxy server for a few days and still be able to use his PERSISTEDPTSSOCOOKIE to rebuild his previous reverse proxy server session. That can be the primary function of this cookie. It can be designed for low-security environments.
URL PATTERN LOGOUT—When an URL pattern is detected as being a SSO logout URL, a user can be strongly logged out. This means that the reverse proxy server can go in and invalidate all known HttpSessions the user has registered. The reverse proxy server can also clear the user out of the LoginTrackingMap, which disables the recreation of a user's session via PTSSOCOOKIE. The reverse proxy server can finally set an expiration date to some time in the past in LoginTrackingService, which can disable the recreation of the user session via PERSISTEDPTSSOCOOKIE.
SSO login can perform authentication behind the scenes so that Active Directory, LDAP, and other identity sources can be used to verify a username, password and other login credentials. The user can be unaware this is occurring; they merely log in to the reverse proxy server.
RProxy can delegate to the Security Service for authentication via an authentication source, such as Active Directory or the portal user database. We have abandoned any thoughts of integrating WebLogic Server or BEA Core Security for the Reverse Proxy Server Springbok.
RProxy can provide intrinsic support for Basic Auth.
Proxy Authenticators
In the PRD these can be referred to as “primary authentication.” These are the authenticators that fire from the proxy to a user's browser. They can be top level objects and their authentication level and name can be changed. The order can be changed. Currently, there can be a 1-to-1 mapping of authenticator to matching experience definition.
There can be only one interactive authenticator (for interactive login pages). Aside from the interactive authenticator, the remaining authenticators are “integrated”. This can be legacy terminology. All authenticators, other than the form authenticator, are integrated.
Security Service Login Token Timeout
The Security Service can have a login token timeout. RProxy can provide a useful integration story for existing Windows Integrated Auth, Netegrity, Oblix, and other customers that have preexisting SSO solutions. Windows Integrated Auth and Netegrity/Oblix integration can occur via external authenticators.
For WIA, the idea can be to write a web application for IIS and enable WIA for that application. The application itself knows how to receive the forwarded request from the reverse proxy server, authenticate the WIA user with the security service to generate a login token, then forward the login token back to the reverse proxy server to perform integrated authentication to complete SSO login. WIA/Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) request diagram.
Summary of Request Flow
- 1. A user can open a browser and enter an URL:
- 2. The reverse proxy server can intercept the request. The reverse proxy server can determine that the user doesn't have an access level high enough for /resource1. The reverse proxy server can determine that the only authenticator which can grant that access level can be the Spnego Authenticator. The Spnego Authenticator sets a 302 response to the spnego servlet: 302.
- 3. Browser can follow the 302 and does an HTTP GET
- 4. The reverse proxy server can recognize that the user needs an anonymous session to access /SpnegoServlet. It can create one and fetches the /SpnegoServlet from the internal address of.
- 5. The request to can be handed to Tomcat. Tomcat can pass control to the Security Service Negotiate Filter. The Negotiate Filter can call the Security Service Negotiate Service to do some processing, but that can be left out of the diagram for simplicity. The Security Service Negotiate Service can recognize that this request cannot be authorized because there are no Authorization headers on the request. It can respond with a 401 and set the header WWW-Authenticate.
- 6. The reverse proxy server can pass the Filter's response back to the browser as is.
- 7. The browser can see the WWW-AUTHENTICATE header and, if properly set up, can query SSPI for a security token for the service that returned the header. The SPN can be generally of the form <servicename>/<machinename>. User can query for these SPNs from LDAP.
- 8. SSPI can send the user's Ticket Granting Ticket to Kerberos. This Ticket Granting Ticket was given to the user after he completed his Windows login. This Ticket Granting Ticket establishes this user's identity to Kerberos and can be used to retrieve a security token for the SPN which just asked the user to identify himself.
- 9. Kerberos queries Active Directory's LDAP repository for security data for the requested SPN.
- 10. Active Directory validates the user's TGT and returns a service ticket for the requested SPN.
- 11. Kerberos can relay this service ticket back to SSPI
- 12. SSPI can relay this service ticket back to the browser
- 13. The browser can encapsulate this service ticket into a security blob and sends it back as an Authorization header.
- 14. The reverse proxy server can grant anonymous access to internal.bea.com/SpnegoServlet as before, except this time the request has an Authorization header containing the SPNEGO security blob.
- 15. Security Service Negotiate Filter can receive the request from the container. It can call the Security Service Negotiate Service to examine the contents of the Authorization header.
- 16. The Security Service Negotiate Service can parse the SPNEGO security blob and does basic validation (i.e. verifies that this is a valid SPNEGO token). If this validation fails, Security Service Negotiate Service can NOT instantiate a SPNEGO Token object. User can verify this in ptspy. If the token validates, Security Service can call JGSS to extract the user identity from the SPNEGO security blob. Security Service can then call JAAS to log this user into a provider. This provider can be swappable and can be an LDAP provider. This can be replaced in m7 by a Virtual Provider to speed up efficiency and eliminate some extraneous LDAP calls. After the user has been validated against the provider, Security Service can wrap the JAAS subject into a wrapper class called an Identity. This Identity can then be stored on the HttpSession. After that, the Negotiate Service returns control to the container.
- 17. Tomcat can call the SpnegoServlet.
- 18. SpnegoServlet reads the Security Service Identity off the HttpSession. If one is found, the servlet can return security data (username, ssovendortype, etc) in the form of http headers.
- 19. The reverse proxy server can parse the response for the security headers. If they are found, the reverse proxy server can validate this user against Security Service over SOAP. This is not in the diagram. If Security Service validates the user, the reverse proxy server can create an Identity for the user and grant the access level associated with the Spnego Authenticator. It can also flush a 302 to the response buffer to the original resource that the user requested in step 1 (i.e. 302:). It can also send back the reverse proxy server cookie which contains the access level that the user has acquired.
- 20. The browser sees the 302 on the http response and follows it (i.e., issues a HTTP GET:). The browser can also attach the cookie that was attached in the previous step.
- 21. The reverse proxy server can examine the request and can find a cookie with an access level equal to that which is associated with the Spnego Authenticator. The reverse proxy server can authorize the request to /resource1 and retrieve this resource for the user.
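A rough sketch of the servlet behavior in steps 17-18 follows; the session attribute key and the response header names are assumptions made for illustration, not the actual integration servlet.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: read the identity the Negotiate Filter stored on the HttpSession and
// echo the security data back as HTTP headers for the reverse proxy server to parse.
public class SpnegoServletSketch extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        Object identity = request.getSession().getAttribute("security.identity"); // hypothetical key
        if (identity == null) {
            // Negotiation did not complete; let the filter's 401 challenge stand.
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // In the real flow the username would come from the Identity wrapper.
        response.setHeader("runner_sso_username", identity.toString()); // hypothetical header
        response.setHeader("runner_sso_vendor", "spnego");              // hypothetical header
        response.setContentLength(0);
    }
}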
Diagram components can be briefly summarized. Further details can follow and are available on this wiki page.
- Security Service/Admin UI—Tomcat instance which serves both Security Service war and Admin UI war.
- Tomcat/RunnerProxy—Tomcat instance which serves Proxy war.
- Tomcat/LoginServer/SPNEGO Integration Servlet/BEA Security Service—Tomcat instance can host the login server war. At the time of this writing, the SPNEGO Integration Servlet can be deployed with this war, along with the BEA Security Service jars. These can be decoupled at a later point in time.
- sso.bat—a short .bat file that can temporarily set java opts to load jaas.conf
- jaas.conf—a short txt file that can configure this Tomcat instance to run as a specific SPN. Also points to the SPN's keytab file.
- krb5.ini—a txt file containing information so that this server can know where to find various kerberos services
- Active Directory—Active Directory can be a term that refers both to Windows Kerberos Authentication Services and the Windows LDAP repository.
- AD User—A user that can be created. This user can be used as a container for a SPN which can run Tomcat.
- ServicePrincipalName(SPN)—An SPN can be an LDAP attribute that can be associated to a container object, such as a user. When a user wishes to authenticate with SPNEGO through their IE browser, the IE browser can query Active Directory and obtain a service ticket for the SPN which can be running Tomcat.
WIA between the browser and RProxy requires sticky sessions or the browser would re-authenticate each time with RProxy. WIA between RProxy and remote resources in HS requires cached connections identified by user, so the SSO cookie must have secure user information that cannot be spoofed.
Remote Delegated SSO Integration Pages
Since many current portal customers use custom SSO integrations for their own in-house SSO solutions, we need to support this in the reverse proxy server. In order to avoid customizations in the core RProxy code, we can use a remote integration framework. Rather than writing code integrations for Netegrity and Oblix, those can be provided as sample integrations using the remote framework.
Customers with Oblix/Netegrity already installed can use their SSO solution to protect the cross-domain login form. This can then forward to a remote resource that can recognize the SSO cookie and return the data to the reverse proxy server.
When RProxy detects that a user needs to be integrated against an SSO solution, the user can be forwarded to a remote servlet specified in the UI. That servlet can manage the interaction with the SSO product and retrieve the SSO credentials. These credentials can then be passed back to RProxy through headers. RProxy can then create a new user session using those credentials.
Experience Definitions and Rules
An Experience Definition can contain a number of properties that can be used by the reverse proxy server to determine the user's experiences related to the user's primary authentication to the reverse proxy server (authentication source, login page, interstitial pages, etc), before an unauthenticated user can access a protected reverse proxy server resource. An experience definition can be selected based on rules (experience rules), using conditions such as which resource is being requested, the IP address of the request origin, and/or other supported condition types.
Each experience rule refers to multiple resources that it can apply to. The experience rule contains at least the following:
- Login resource: Default Login Resource
- Associated resources: Resource1, Resource2, ResourceN
- Associated proxy authenticators: Basic Auth, Interactive Login Page, Windows Integrated Authentication
- Internal login endpoint suffix: /loginpage.jsp
- Internal interstitial endpoint suffix: /interstitialpage.jsp
- Internal error endpoint suffix: /errorpage.jsp
- Post logout external URL:
-
- Maximum interstitial visits: 1
An explanation of each of these fields follows:
Login resource: this can be a normal resource under the covers but it has additional restrictions. Its external URL determines where the user can be redirected to in order to begin the login sequence.
Associated resources: these are the resources that the experience can apply to.
Associated proxy authenticators: these are the authenticators that can fire if the experience is selected. The interactive authenticator always fires last.
Internal login endpoint suffix: this suffix can be appended to the external URL prefix of the login resource to determine the loginpage endpoint, which can be where users are redirected at the start of the interactive login.
Internal interstitial endpoint suffix: same deal but this can be for interstitials. Note that if the user wants to make interstitials come first, they just flip the endpoints.
Internal error endpoint suffix: same, but for errors. These pages only get visited when an error occurs.
Post logout external URL: this URL can be visited after SSO logout occurs. If it's not specified, the user can be redirected to the original requested resource (then the SSO login sequence begins anew).
Maximum interstitial visits: this can be the number of times a user can visit interstitial pages before causing an error. This prevents the user from not clicking the “I agree” in a consent page and visiting it forever, and also prevents accidental loops since the administrator can write custom interstitial pages.
The experience definition's external URL login path, /login above, can be reserved by the login resource. This prevents any other resources from mapping to the login path(s). The path “/” alone can be an invalid external URL login path.
Experiences can also have conditions that determine when it can be selected based on time of day, request IP, etc.
The login and interstitial pages can make decisions about what to display to a given user/request (such as the language based on User-Agent, what style of page based on IP addresses, etc.).
They can also embed tags to be processed like any resource. RProxy can provide the login and error pages with relevant information: the original request headers, requested resource information, et al.
If maximum interstitial visits is -1, there can be no maximum enforced for this experience definition.
Interactive Login Sequence (Login Pages and Interstitials)
In the examples, the primary login hostname can be proxy.runner.com. When the SSO login sequence can be initiated for a request, the following occur:
- 1. An experience definition can be chosen based on the original requested resource, the request itself, and the current user.
- 2. RProxy redirects the user to the external login endpoint:
-
During the login sequence, HTTP headers can be used to communicate information between RProxy and the login/interstitial/error pages. This can be similar to the CSP headers used between the portal and portlets. All relevant request information can be forwarded from the user's browser to the login pages via HTTP headers, such as User-Agent and IP address.
- 3. RProxy returns the login HTML content to the user, rewritten in the same manner as a normal relative resource, e.g. links are rewritten to point to the external URL login path: proxy.runner.com/login and so a post link to the original login page would go to:
-
- a. If the login page is unavailable, a default login page can be returned from a local RProxy file as a fallback. Thus, even if the server hosting the login pages is down, user login can still occur.
- b. If the default is unavailable, an RProxy error page can be returned. This can be the user customizable page at errorpage.jsp or if that cannot be contacted, an error page from a local file, and if that is unavailable a hard coded error.
- 4. The user enters username, password, and authentication source information and submits the login page, which can be forwarded to the mapped internal login URL.
- a. The internal URL receiving the form post or get with query string can be responsible for parsing out the username, password, and other information and then responding to RProxy with empty HTML content but with the headers:
- runner_username=username
- runner_password=password
- (There are other headers, see LoginPageHeader in the code.) This means that for every login, one more request can be sent to the internal login URL than would otherwise be necessary if RProxy parsed username/password information from a form submission itself. The benefit can be that the user customized login page can parse the submission and use whatever fields they need. (This might be a requirement for ASP.NET or JSF components that create form and component IDs in sequence based on the other components on the page and therefore may have submit fields on a page that change.) RProxy detects the login headers and, instead of returning a response to the user's browser, attempts authentication with those credentials. A minimal sketch of such a login endpoint appears after this sequence.
- b. If authentication fails, the user can be sent to the login page again, and tags or headers can be used to describe the error.
- 5. After authentication succeeds, a user session can be created but marked as attempting interstitials.
- a. If interstitial pages are configured, RProxy redirects the user to the interstitial page endpoint. The user metadata and other information can be sent in HTTP headers.
-
- RProxy continues to forward user requests that are to:
-
- Until a page returns the HTTP header: runner_interstitial_complete=true. The interstitial sequence ends in error if either the maximum interstitial visits are reached or the header runner_interstitial_complete=false is returned.
- b. If the interstitial fails the user can be sent to the error page.
- 6. When interstitial pages are complete or if none are configured, the user session can be logged in and the user can be redirected to the original requested resource.
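As referenced in step 4a, a customized internal login endpoint only has to parse its own form fields and answer with the credential headers on an empty response. A minimal sketch follows; the form field names ("user", "pass") are hypothetical, while the runner_username/runner_password header names are the ones described above.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginSubmitServletSketch extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // The customized page can use whatever field names it likes; these are examples.
        String username = request.getParameter("user");
        String password = request.getParameter("pass");

        // RProxy looks for these headers instead of a rendered page (see LoginPageHeader).
        response.setHeader("runner_username", username);
        response.setHeader("runner_password", password);
        response.setContentLength(0); // empty HTML content
    }
}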
SSO sends the following 302 redirects to the browser:
- To a login domain
- e.g. when the requested domain crm.bea.com is not the primary SSO cookie login domain of proxy.plumtree.com
- To a login page after integrated authenticators fail for a resource or if there are none
- To the interstitial page after login page completion
- To the original requested resource after login success
- To an error page when an error occurs
There are several different ways that the reverse proxy server can authenticate the user to the back-end resource:
- None (Authentication disabled)
- Basic/Digest Authentication
- Form based Authentication
- Delegated Authentication (Windows Integrated, Netegrity, Oblix, etc. . . . )
No Authentication
By default, the reverse proxy server can not authenticate the user to the back-end resource. This handles the case of anonymous access back-end resources, as well as the case where a company, due to security concerns, does not want the reverse proxy server to manage user passwords in its Credential Vault.
Basic/Digest Authentication
The reverse proxy server can provide a way to authenticate a user using basic/digest authentication. This involves detecting the authentication challenge header in a resource response and resubmitting the request with the appropriate user credentials. This can be essentially a simpler version of the Form based Authentication solution described below.
The Basic/Digest Authentication for a resource can check the HTTP headers of resource responses looking for authentication challenges. If a challenge is detected, the user's name and password can be supplied to the resource without the user seeing the challenge. If the response comes back again, meaning that an incorrect name or password was supplied, then the challenge can be passed on to the user. Once the user responds to the challenge, the reverse proxy server can add that username and password to the Credential Vault, if it is configured to be used.
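A small sketch of the two pieces involved, detecting the challenge and building the credentials to resend, is shown below; it illustrates standard Basic auth header handling rather than the reverse proxy server's actual classes.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class BasicAuthHelperSketch {

    // True when the remote response is asking for Basic credentials.
    public static boolean isBasicChallenge(int statusCode, String wwwAuthenticateHeader) {
        return statusCode == 401
                && wwwAuthenticateHeader != null
                && wwwAuthenticateHeader.regionMatches(true, 0, "Basic", 0, 5);
    }

    // Builds the Authorization header value to resend with the original request.
    public static String authorizationHeader(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }
}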
In order to setup Basic/Digest Authentication for a resource, the administrator can configure the username and password, similar to the form fields in Form based Authentication. There are three possible ways to fill out these values:
- User Profile—this type can be used to pull the value from the user's profile; the field value refers to the name of the user profile setting.
- Credential Vault—this type can be used to pull the value from the Credential Vault; the field value refers to the name of the credential vault setting.
- Constant—this type can be used to specify constant field values. Administrators enter a value for the field and that value can be used for all users in all cases.
Form Based Authentication
Form based Authentication can be the most common form of login for web applications, although many enterprise web applications use some kind of SSO technology.
The reverse proxy server can detect login pages, automatically fill them out and display the final page without bothering the user. If there is a login failure, the reverse proxy server can display the login page to the user and then store the credentials the user fills out for later use. This can be essentially a multi-app “Remember My Password” feature.
Defining a Login Page
Even though we call it login page detection, the reverse proxy server's feature can be a lot more generic than that. The reverse proxy server can detect particular pages and issue GET or POST requests in response to those pages, thus simulating a particular user action.
Login pages are defined on the resource level by the administrator. Administrators can define several login pages. When defining a login page, the Administrator has to enter the following information:
- Login Page URL Pattern—a pattern that describes a set of URLs for a particular login page
- Login Page Text Pattern—a pattern that describes the particular content that identifies a login page. This also requires two Booleans to describe how to use the text pattern: Use for Login Page Detection and Use for Action URL.
- Login Page Action URL—a complete URL of the page that can be used to submit the form data.
- Set of form fields that a login form contains. The following information can be specified for each form field:
- Field name—the name of the input element in the HTML form
- Field type—the type of the value. This attribute describes where the reverse proxy server should get the value for the field when doing automatic form submission. Possible values are
- User Profile—this type can be used to pull the value from the user's profile; the field value refers to the name of the user profile setting.
- Credential Vault—this type can be used to pull the value from the Credential Vault; the field value refers to the name of the credential vault setting.
- Default—in case when the field has a default value (i.e. <input type=“hidden” name=“query” value=“blah”/>), Administrator can choose Default type to tell the reverse proxy server to extract this default value from the form and use it in the form post.
- Constant—this type can be used to specify constant field values. Administrators enter a value for the field and that value can be used for all users in all cases.
- Field value—depending on the type the value might be ignored (Default type) or refer to the constant that can be sent to the remote server (Constant type) or refer to the name of the user profile or credential vault setting.
In cases when no fields are specified, the GET request to the remote server can be issued as a result of detecting a login page.
It can be important to remember that the main purpose of this feature can be to automate logins into remote applications. In some cases logging in might be a multi-step process. In order to support such cases, the reverse proxy server can allow defining multiple login pages for a single resource.
Login Page Detection
Detecting login pages can be pretty trivial once the user has the right data. RProxy relies on the administrator to enter all the necessary data through the admin UI. Login pages are defined by 3 attributes: Login Page URL Pattern, Login Action URL and Login Page Text Pattern. Out of these 3 attributes only two are used during detection phase. Those are Login Page URL Pattern and Login Page Text Pattern. Login Page URL Pattern can be used to detect login pages based on the URL. For example, you know that the URL for the Plumtree portal's login page always looks like
-?*space=Login*
Notice that Login Page URL Pattern only supports the * wildcard. It does not support any other wildcards.
Sometimes the URL of the login form is always different (Amazon™). However, the login form always looks the same. This means that it can be possible to detect the login page based on the content of HTML returned from the remote resource. And this can be what Login Page Text Pattern is used for. Login Page Text Pattern can be a regular expression written using Java RegEx rules. When this field is specified, the reverse proxy server can check every page returned from the remote server to see if it matches the specified regular expression. For example:
- <form.*action=\“?(.*/sign-in[^\”]*)\”.*>
The above regular expression can be used to detect login pages on the Amazon™ website. Basically, it looks for the form on the page with action URL that has a word sign-in in it. Notice the parenthesis; they are used to define a group within the regular expression. This means that when the reverse proxy server encounters a login form on the page that looks like this:
- <form name=“sign-in” action=“/gp/flex/sign-in/select.html/102-6407777-8688900?%5Fencoding=UTF8&protocol=https” method=“POST”>
it can extract the URL into a separate group. This means that the call to Matcher.group(0) can return the full <form name=“sign-in” . . . method=“POST”> string and the call to Matcher.group(1) can return only the action URL /gp/flex/sign-in/select.html/ . . . / . . . &protocol=https
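For illustration, the same kind of pattern applied with java.util.regex looks like the following; the HTML string is a shortened stand-in for a remote login page.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class LoginFormPatternDemo {
    public static void main(String[] args) {
        // The sample Amazon-style Login Page Text Pattern from the text.
        Pattern loginPageTextPattern =
                Pattern.compile("<form.*action=\"?(.*/sign-in[^\"]*)\".*>");
        String html = "<form name=\"sign-in\" "
                + "action=\"/gp/flex/sign-in/select.html?protocol=https\" method=\"POST\">";

        Matcher matcher = loginPageTextPattern.matcher(html);
        if (matcher.find()) {
            // group(0): the whole matched <form ...> element
            // group(1): just the extracted action URL, usable as the Login Form Action URL
            System.out.println("Login page detected: " + matcher.group(0));
            System.out.println("Action URL: " + matcher.group(1));
        }
    }
}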
This is a very important feature, especially in cases like Amazon™, where the Login Form POST URL can be dynamic. In order to support dynamic Form POST URLs the reverse proxy server can allow an administrator to leave the Login Form Action URL field blank when defining the login page and instead specify a regular expression that would extract the Action URL from the action attribute of the form.
The Login Page Text Pattern can be controlled by two Booleans that specify when it should be used (i.e. for login page detection and/or Action URL extraction).
The class that implements this logic can be called PTLoginFormDetector and can be in the FormPicker project.
Each response can be checked against the entire set of login pages defined for that resource. This means that the more login pages are defined for a resource, the more time the reverse proxy server can need to check every response against all of them. We believe this is a reasonable compromise, but we need to performance test this piece of code.
Since certain form pages only occur after the main login page (i.e. a usage agreement), we could only check for these immediately after another login form occurred, if there was a performance problem with checking for too many login forms.
The reverse proxy server can send authentication data to remote resources, including HTTP headers and cookies. The reverse proxy server can also automatically perform form POSTs in order to submit user login data when remote resources present a login page so that the user does not see this login page. When form POST fails and the user cannot be logged in, they are presented with the remote resource login page.
Login Data Submission
When the reverse proxy server detects a login page, it can try to automatically submit login data to the remote server, emulating a user filling out a login form. After we detect the login form we know all the information that we need to submit the data: Login Form Action URL and the list of fields. For each field we know its name, type and value. It can be therefore possible to iterate through the list of fields and generate either a body for the POST or a query string for the GET. Once we have the data and know the Action URL, we can submit the request to the remote server.
When configuring login form info, the administrator can be able to select whether the form can be submitted as a GET or a POST. In a GET request, all of the form fields can be treated as query string arguments, or if there are no form fields, the action URL can be sent as a plain URL. This feature can allow the reverse proxy server to simulate almost any action a user would take in response to seeing a particular page.
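A sketch of how the configured fields could be turned into a GET query string or a POST body is shown below; the field names and values are placeholders for what would really come from the login page definition, user profile, or Credential Vault.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public final class LoginFormSubmitterSketch {

    // Encodes the fields as application/x-www-form-urlencoded data (the POST body).
    public static String encodeFields(Map<String, String> fields) throws UnsupportedEncodingException {
        StringBuilder out = new StringBuilder();
        for (Map.Entry<String, String> field : fields.entrySet()) {
            if (out.length() > 0) {
                out.append('&');
            }
            out.append(URLEncoder.encode(field.getKey(), "UTF-8"))
               .append('=')
               .append(URLEncoder.encode(field.getValue(), "UTF-8"));
        }
        return out.toString();
    }

    // Builds the URL for a GET submission; with no fields, the action URL is requested as a plain URL.
    public static String buildGetUrl(String actionUrl, Map<String, String> fields)
            throws UnsupportedEncodingException {
        String query = encodeFields(fields);
        if (query.isEmpty()) {
            return actionUrl;
        }
        return actionUrl + (actionUrl.contains("?") ? "&" : "?") + query;
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("username", "jsmith"); // would come from the Credential Vault in practice
        fields.put("password", "secret"); // would come from the Credential Vault in practice
        System.out.println(buildGetUrl("https://mail.example.com/login", fields));
    }
}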
Acquiring a User's Data
In order for the reverse proxy server to automatically log a particular user into a remote application, the reverse proxy server has to know the user's credentials (username, password, etc.) How does the reverse proxy server acquire this information?
When the reverse proxy server tries to generate a POST data to send to the remote server, it can try to go through all of the defined fields and get their values. At that time it can be possible to detect if the values for some of the fields have not yet been entered. When such a situation can be detected, the reverse proxy server can do two things: it can forward the login page to the user and it can set a flag on the user's session that indicates that a login page was just displayed to the user.
Under normal circumstances, the user can fill out the login page as they normally would if the reverse proxy server was not installed. When the user submits the form, the information goes back to the reverse proxy server. At that point the reverse proxy server knows which login form was submitted (because of the flag on the session), it knows the names of the fields that need to be captured (those are the fields that did not have data entered for them) and it knows the data for those fields (since the user just typed all the information in and sent it to the proxy). Having all this information makes it trivial to extract the data from the browser's POST request.
Dealing with Login Failures
What happens when the user's password on the remote system has changed or expired? How does the reverse proxy server detect such cases and acquire new credentials for the user? When a user's credentials change on the remote system, sending old credentials that the reverse proxy server has stored can result in a login failure. Since we are dealing with web applications, login failures are usually dealt with in two ways: either the remote application just shows the login page again (possibly with some additional text) or the remote application shows an error page with a link that says “click here to try again”.
Based on the assumptions laid out above, we can claim that when the same login page can be displayed twice in a row (without displaying any non-login pages in between), then this can be a good indication of a failed login. So, what about a case when an error page is displayed? It can be possible to reduce that case to the previous one by defining this error page as a login form in the reverse proxy server. Since the expected user action can be clicking on a link (GET request), the administrator can just define a new login page in the reverse proxy server's admin UI, leave the set of fields empty, set it to GET, and set the Login Action URL to the URL of the link (we can use a regular expression to extract that info). Voila, we just reduced the case of separate error pages to the case of a repeated login page.
When the reverse proxy server detects a failed login it forwards the login page to the user and then switches to the mode of acquiring credentials.
Delegated Authentication
The reverse proxy server can be able to authenticate a user to a back end resource using the credentials from a 3rd party SSO solution (i.e. Windows Integrated Authentication, Netegrity, Oblix, etc. . . . ). This can most likely involve forwarding the appropriate header to the resource.
In some cases, before forwarding content from the remote server to the browser, the reverse proxy server has to transform the data that it had received from the remote server. There are several types of transformations.
URL Re-Writing
Consider a case when an internal resource with the URL can be mapped through the reverse proxy server to the URL. When the user hits the reverse proxy server URL, her request can be forwarded to and she can be presented with some HTML page. What should happen to the links on that HTML page that point back to the original URL? The user might not have direct network access to mail.plumtree.com and therefore such links should be re-written to point back to.
The reverse proxy server leverages HTMLTransformer code from the portal to provide this functionality. You can learn more about this stage by looking at the code in the PTTransformerModule class. Such re-writing should only be needed in cases when the external URL can be different from the internal URL.
Tag Transformation
The reverse proxy server can be not only a proxy, but also a development platform for the composite application. One of the features that it provides to simplify the development of various applications can be support for adaptive tags. The idea is very simple: The reverse proxy server knows information about users, resources, security, etc. When an application includes a special tag into its HTML output, the reverse proxy server can find those tags and replace them with some user-specific data, or potentially perform some action on behalf of the remote application, etc.
After a response is returned from a remote resource, tag processing occurs. Tag processing also occurs on login and interstitial pages. Identifying tags within the text of the response can be again done by leveraging the HTMLTransformer and Tag Engine code from the portal. The tag processing can be controlled by the PTTagEngineModule class. Tag processing can be implemented using the same Tag Transformation Engine as in the 6.0 Portal. This engine has been componentized so that it can be reused by RProxy.
RProxy serves error pages in various contexts: login failure, maximum login attempts exceeded, permission denied when accessing a resource, et al.
The error page displayed can be chosen based on the resource being accessed and the current experience definition (which contains the URL of the error page). Error pages can not vary depending on the type of error. The remote server can have complete control over the HTML of the error page, and can simply include the error message by adding an RProxy error text tag to the content. In effect, error pages can work the same way as login and interstitial pages.
Error Request Information
In order to properly support customization of Error UI pages, RProxy can provide the remote error pages with certain information. This includes the information necessary for login requests (described earlier—including HTTP Headers, cookies, and request information), as well as the following information:
- Error Type—What kind of error has occurred (i.e. Login, Request, Server, Other)
- Error Code—A code for the specific error that has occurred (i.e. Invalid password, remote server unavailable, no such resource).
- Error Severity—The severity level of the error (i.e. Warning, Error, Fatal).
The actual error message can be included on the remote error page through an error tag. Providing error types and codes to the remote server can allow the error page to make the decision to display the error or not, or to replace it with a custom error message.
The Adaptive Tag Engine from the 6.0 portal can be componentized and reused in the reverse proxy server. This involves several major tasks:
- Remove portal dependencies from codebase.
- Convert Tag Engine to Factory/Interface pattern in case the reverse proxy server needs different engine functionality than the Portal.
- Unify Engine implementation details into a single easy-to-use class.
- Write additional unit tests for the engine.
The reverse proxy server can utilize and maintain a specific set of tags that can be common with the portal tag engine. This can include the existing core and logic tag libraries from the portal, including any upgrades and bug fixes to the tags. There can also be a common tag library which can contain tags thought to be common to all applications that could integrate with the tag engine (currently portal and the reverse proxy server). This library can include error and user info tags. The portal tag libraries can have to be upgraded to match the reverse proxy server in the future.
In addition to providing support for the core engine tags (logic, html, i18n, etc. . . . ), the reverse proxy server can also need new tags to cover concepts only relevant to the reverse proxy server. These include:
- Resource Links (data tags similar to current portal navigation data tags)
- The Reverse Proxy Server Auth Sources (similar to current portal auth source tag, but per resource)
- Pagelets (front end to pagelets, can be discussed in detail below)
- The reverse proxy server Debug Tags for Roles and Experience Definitions (how did you get the current role and/or login page)
- The reverse proxy server Info Tags (Role & User)
- Analytics Tag (Add any analytics event you want)
Pagelets are composed of two basic parts: a front-end pagelet tag and a pagelet engine. The pagelet tag can be used to insert a pagelet into HTML. It can then be replaced by the reverse proxy server with the content of the pagelet. The pagelet engine can be a massively parallel pagelet processing (MP3) engine that can be used to retrieve multiple pagelets from back end resources simultaneously.
Pagelet Tag:
The tag can be a simple tag that can allow a user to specify the pagelet ID and include data to be passed to the pagelet. There can be two kinds of data passed to the pagelet. The first can be tag attributes, and the second can be an XML payload. The standard xhtml tag attributes in the pagelet page (i.e., not pt: attributes) can be passed to the pagelet directly. The XML payload allows the HTML author to include more complicated data to be passed to the pagelet. A pagelet tag can, for example, name the pagelet and carry both these attributes and the XML payload.
The Massively Parallel Pagelet Processing (MP3) Engine can allow the pagelet system to retrieve multiple pagelets simultaneously. In order to retrieve content, a client component can create a Request Chain composed of individual Requests. Each request can contain all of the information necessary to retrieve a single pagelet. This information includes the ID of the pagelet, the back-end resource URL, the pagelet attributes, and the XML payload.
The Request Chain can send off multiple HTTP requests to retrieve the data, and the data can be available to the client component after all requests have returned or timed out. The data can then be transformed and converted into a Markup Array.
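A rough sketch of the parallel retrieval idea, using a plain thread pool in place of the actual Request Chain API, is shown below; the fetch method is a placeholder and the per-future timeout handling is simplified.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public final class PageletRequestChainSketch {

    public static List<String> retrieveAll(List<String> pageletUrls, long timeoutSeconds)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, pageletUrls.size()));
        List<Future<String>> futures = new ArrayList<>();
        for (final String url : pageletUrls) {
            futures.add(pool.submit(new Callable<String>() {
                public String call() {
                    return fetch(url); // one HTTP request per pagelet, all running concurrently
                }
            }));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> future : futures) {
            try {
                // Simplification: each future is given the same timeout in turn.
                results.add(future.get(timeoutSeconds, TimeUnit.SECONDS));
            } catch (TimeoutException | ExecutionException e) {
                results.add(""); // timed-out or failed pagelets contribute empty markup
            }
        }
        pool.shutdown();
        return results;
    }

    private static String fetch(String url) {
        return "<div>pagelet content from " + url + "</div>"; // placeholder retrieval
    }
}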
Pagelet Request Flow
The pagelet request flow can involve 2 passes through the HTML data. The first pass can collect the pagelet data and initiate content retrieval, while the second pass can convert the standard adaptive tags and replace the pagelet tags with the pagelet data.
- User requests a page from the reverse proxy server.
- The reverse proxy server retrieves content from the remote server.
- Transformer converts HTML into markup fragments.
- 1st pass through markup fragments can be done to convert URLs and retrieve pagelets and pagelet data.
- MP3 request chain can be initiated for all the pagelets simultaneously and waits until all pagelet content can be retrieved.
- Returned content can be processed for adaptive tags in individual tag engines (pagelet tags not allowed).
- Adaptive tags in the main page are processed. Individual pagelet tags insert the processed pagelet content into the HTML output. Transformed and processed HTML content can be returned to the end user.
Pagelet JavaScript and Style Sheets
There are two main requirements for pagelet JavaScript and Style sheets. The first requirement can be that there can be a way to put JavaScript and Style Sheets into the head element of the HTML page. The second requirement can be that duplicate .js and .css files only be included once per HTML page.
Pagelet developers need to mark their JavaScript and Style sheet links to be included in the head using a special JavaScript tag (i.e., <pt:common.includeinhead>). Pagelet consumers then need to include a special Include JavaScript Here tag (i.e., <pt:common.headincludes>) in the Head element of their HTML page. Pagelet JavaScript from head tags would then be included in the output of the Head Includes tag. The Includes tag would also filter out duplicate .js and .css files.
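The duplicate-filtering behavior of the Head Includes tag can be pictured with the short Java sketch below; the collector class and method names are illustrative assumptions, not the actual tag implementation.

import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative collector for pagelet head includes.
public class HeadIncludeCollector {

    // LinkedHashSet keeps the first occurrence of each .js/.css link, drops duplicates,
    // and preserves the order in which includes were requested.
    private final Set<String> includes = new LinkedHashSet<>();

    // Called for each include-in-head tag encountered while processing pagelets.
    public void includeInHead(String linkOrScriptTag) {
        includes.add(linkOrScriptTag);
    }

    // Called by the Head Includes tag to emit the de-duplicated head content.
    public String renderHeadIncludes() {
        StringBuilder head = new StringBuilder();
        for (String tag : includes) {
            head.append(tag).append('\n');
        }
        return head.toString();
    }
}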
Security Model
Two types of security exist in the reverse proxy server. The first can be security enforcement exercised on RProxy consumers, whose action can be to access a particular resource. This security can be managed through resource policies, and dictates whether a user has access to a resource. The second can be internal security exercised on RAdmin administrators, whose action can be to configure RProxy. This security dictates which of the reverse proxy server constructs can be viewed or configured by a particular administrator.
Proxy Security
The reverse proxy server resource can be protected by its resource policy, which describes the conditions under which a user may be granted access to the respective resource, and the roles associated with those conditions.
Resource Policy API
The following interfaces compose the subsection of the Rules API that can be specific to policies:
IResourceRuleFactory can be the entry point to policies. It can be used to generate an
IResourceRuleService based on the CRE or Security Service implementations.
IResourceRuleFactory:
- IResourceRuleService GetResourceRuleService(POLICY_ENGINE_TYPE)
- enum POLICY_ENGINE_TYPE {CSS, CRE}
IResourceRuleService can be the main entry point to the policy rule manager and evaluator.
IResourceRuleService:
- IResourceRuleManager GetResourceRuleManager( )
- IResourceRuleEngine GetResourceRuleEngine( )
- IResourceConditionTypeManager GetResourceConditionTypeManager( )
IPolicyRuleEngine can be the run-time role evaluation engine.
IPolicyRuleEvaluator
- List<IResourceRuleRole> getRoles(IRunnerIdentity user, IRunnerResource resource, Map context)
IResourceRuleManager administers policies, rules, roles, and conditions.
Creating a policy rule IResourceRule automatically creates the associated IResourceRuleRole and IResourceBackendRole.
IResourceRuleManager: extends IRuleManager
- List<IResourcePolicy> GetPolicies( )
- IResourcePolicy GetPolicy(int policyID)
- List<IResourceRole> GetPolicyRoles( )
- IResourceRole GetPolicyRole(int roleID)
IResourceRoleRule extends IRule
- IResourcePolicy GetPolicy( )
- IGroup[ ] getGroups( )
- void addGroup(IGroup)
- void removeGroup(IGroup)
- IUser[ ] getUsers( )
- void addUser (IUser)
- void removeUser (IUser)
- IResourceRole GetRole( )
IResourceRole
- IResourceRoleRule getRule( )
- IResourceBackendRole getBackendRole( )
- IResourcePolicy getOwningPolicy( )
IResourceBackendRole
- IResourceRole getResourceRole( )
A default IResourcePolicy can be created when each IRunnerResource is created. In Springbok, there is one and only one policy per Resource.
IResourcePolicy
- void addRole(IResourceRole role)
- void removeRole(IResourceRole role)
- IResourceRole[ ] getRoles( )
- IRunnerResource getOwningResource( )
IResourceConditionTypeManager extends IConditionTypeManager. It simply limits the conditions that are retrieved, but adds no other functionality.
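A brief usage sketch of the policy evaluation path is shown below. It assumes the interfaces declared above, that a factory instance has already been obtained, and that the engine returned by GetResourceRuleEngine( ) exposes the getRoles( ) evaluation method declared above; the context key is an illustrative assumption.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: relies on the Resource Policy API interfaces declared above.
public class PolicyEvaluationExample {

    public List<IResourceRuleRole> rolesFor(IResourceRuleFactory factory,
                                            IRunnerIdentity user,
                                            IRunnerResource resource) {
        // Obtain the rule service backed by the Security Service (CSS) implementation.
        IResourceRuleService service =
                factory.GetResourceRuleService(IResourceRuleFactory.POLICY_ENGINE_TYPE.CSS);

        // Build the evaluation context describing the incoming request.
        Map<String, Object> context = new HashMap<>();
        context.put("requestedResource", resource);   // assumed context key

        // Evaluate the resource's policy; an empty role list means access is denied.
        return service.GetResourceRuleEngine().getRoles(user, resource, context);
    }
}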
Policy Evaluation
When a request comes in to RProxy, the target resource and the incoming subject can be specified in the context Map. “Subject” encapsulates the following elements that describe the context of the request:
- the user's identity
- the environment under which the request is being made
- the time of day
- the date
- the IP address of the requesting computer
RBAC Security
The policy of the requested resource can then be evaluated, which decides whether this particular request can be granted access in the form of a simple “YES” or “NO” answer.
Security designs can be based on role-based security (Role-Based Access Control, or RBAC). To be clear, what this means is that a resource policy defines one or more roles that can access the resource, and describes what it means for a user to be in each of these roles. Then, an incoming subject can be granted access to the resource if and only if the subject meets the criteria of one of those roles.
This can be in contrast to Access Control List (ACL) security, where control over a resource can be defined by a list of explicit users or groups that can access the resource. The ACL on a resource typically defines what actions can be performed on the resource by each group. RBAC can be more easily managed than ACL security when there are many users and groups, which can be true in our case.
For a given resource, a set of security roles can be specified. Each role can be composed of the set of users and groups that can be considered for this role, and a series of rules that describe the conditions under which a subject can be considered to be part of this role.
The rules for a role are a series of conditions that are combined with the Boolean operators “AND” and “OR” and nesting to return a “YES” or “NO” decision on whether a subject can be in the role. Notice, the Boolean operator “NOT” can be unsupported for Springbok for purposes of semantic clarity.
A rule condition can be of the following types:
Each resource has an associated policy, and each policy contains a set of defined roles. Each role can be a set of users and groups that can be considered for the role, and a set of rules under which a subject can be considered part of this role.
An incoming request subject can be evaluated against each role for a particular resource, and if the subject can be found to be in one or more roles, the access decision can be “YES” and the set of matching roles can be then passed on to the protected resource as additional information.
Security Service Rule Engine
Security Service provides extensive flexibility in evaluating rules, and requires no external dependencies.
Policy Implementation and Dependencies
Resource policies have two distinct components: policy rule evaluation and policy persistence. On both accounts, policy rules and experience rules have the same structure of complex Boolean operators tying together conditional statements. Valid types of conditions overlap in both cases. The only notable architectural and functional difference can be that top-level experience rules are evaluated sequentially, while top-level resource policy rules are evaluated with Boolean operators.
Clearly, the similarities in resource policies and experience rules beg for a common persistence API and rule engine. Referencing the Experience Rules section of this document, the strategy in persistence can be to construct rules that are stored in elemental form so that they can be modified and retrieved incrementally through a rule builder UI, and keep these rules synchronized with a rule engine.
The proposed API for building rules in the Experience Rules section satisfies the requirements of Resource policies, as does the rule engine interface. However, the rule engine should be provided via Security Service's extensive rule evaluation capabilities rather than Portal 6.0's implementation of the Core Rule Engine.
Security Service Integration Overview
Security Service v1 provides an expressive set of security features, but has some constraints that the reverse proxy server must work around in order to provide authorization via Security Service. Security Service works in the following way: Security Service provides a set of security services including authentication, authorization, and role mapping. Each of these services communicates through Security Service via a plug point and can be swapped in and out with different implementations of the same service. Because of this, each service can be called a “provider” and default providers come packaged with Security Service for each of the services. For example, the default role mapping provider can be implemented based on the XACML standard.
Providers are swapped via mBeans, and security configuration can also be performed by manipulating mBeans. Run-time security evaluation can be performed via a separate set of interfaces.
For role mapping, configuration may not be straightforward, because the XACML provider does not provide an interface for direct manipulation of XACML constructs. Manipulations are limited to the addition, deletion, and replacement of XACML policies. Policies can be passed in as either raw XACML strings or the Policy object that can be part of the CSS/XACML policy schema, which can be found at com.bea.common.security.xacml.policy (in cssenv.jar).
Common Rules API
Policies and experience rules are both constructed out of potentially nested and complex Boolean expressions, and based on product requirements, also share many of the same basic conditions. Because constructing a rule schema is a significant effort, it makes sense to share as much of the effort involved between the two as possible. Experience rules and Resource policies are different in functional aims but are distinctly similar in their form and composition. This has led to an effort to consolidate their design, API, and implementation as much as possible. The benefits of this are:
- elimination of duplicate efforts
- a common API allows for the potential for a shared management UI
- code reuse
The following API defines generic interfaces for representing rules and expressions, including different types of operators and condition types. The rule management API for creating, loading, saving and deleting rules and expressions is also defined as part of the common interfaces. The goal of the common API is to provide a unifying interface for representing both resource policies and experience rules such that the different management consoles (UI) for these different types of rules can use the same API for managing and displaying the rules. In addition, the common API encourages code reuse between resource policy and experience rule implementations.
Definitions:
- Expression: a set of conditional clauses separated by operators (AND/OR) that evaluates to either TRUE or FALSE. The most basic expression consists of just a single conditional clause (e.g., [IP-address=192.168.0.1]). Expressions can be nested to form a more complex expression.
- Rule: an expression that is associated with an action, such that when the expression evaluates to TRUE, the action can be chosen/executed. In essence, a rule is an IF-THEN statement, with the expression being the IF part and the action being the THEN part. The action can be producing an object ID (for example, an Experience Definition in the case of Experience Rules) or just a Boolean outcome (for example, access or no-access in the case of Policy Rules).
Rules API
Java Package: com.plumtree.runner.rules
Rule and Expression Interfaces
IExpression:
- Property: int id
- Property: String name
- Property: String description
- Property: Date created
- Property: Date lastModified
- boolean isComplexExpression( )
ISimpleExpression extends IExpression
- Property: int conditionTypeID //(e.g., Accessed Resource, IP address, Time, etc)
- Property: int conditionOperatorTypeID //(e.g., equal, greater-than, between, etc)
- Property: String conditionValueStr
- Property: String conditionValue2Str //(for condition types that require 2 input values, e.g., time condition that requires start time and end time)
IComplexExpression extends IExpression
- Property: int operatorTypeID //(e.g., AND operator, OR operator)
- Property: Set<IExpression> subExpressions( )
- void AddSubExpression(IExpression s)
- void RemoveSubExpression(IExpression s)
An expression is either a simple expression (i.e., the most basic expression containing only a single conditional clause), or a complex expression (i.e., containing sub-expressions joined together by a logical operator).
The following properties are used by a simple expression to specify a conditional clause:
- conditionTypeID, which indicates what type of condition it is, e.g., IP address or time.
- conditionOperator, which indicates the type of equality/range operator (equal, greater than, etc) used in comparing the condition value.
- conditionValue, which is the value to compare the condition against, e.g., 192.168.0.1 for an IP-address condition type.
The following properties are used by a complex expression to construct an expression tree:
- subExpressions, which lists the sub-expressions this expression depends on
- operatorTypeID, which indicates the relation between the sub-expressions, i.e., either the sub-expressions are AND-ed or OR-ed together, or negated (NOT-ed).
IRule:
- Property: IExpression condition
- Property: int actionValue
- Property: int id
- Property: String name
- Property: String description
- Property: Date created
- Property: Date lastModified
By definition, a rule is an expression that has an associated action value with it, such that when the expression evaluates to true, the action value can be used. If we think of nested expressions being a tree of sub-expressions (recursively), the top-level (root) node is the one that can be referenced by a rule (to form the expression for its condition).
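The recursive shape of nested expressions can be illustrated with the small self-contained Java sketch below; the classes are simplified stand-ins for the ISimpleExpression/IComplexExpression interfaces above, not the actual implementation.

import java.util.Arrays;
import java.util.Map;
import java.util.function.Predicate;

// Simplified stand-ins showing how an expression tree evaluates to TRUE or FALSE.
public class ExpressionTreeExample {

    interface Expr { boolean evaluate(Map<String, Object> context); }

    // A simple expression wraps a single conditional clause.
    static Expr condition(Predicate<Map<String, Object>> clause) {
        return clause::test;
    }

    // Complex expressions join sub-expressions with AND or OR.
    static Expr and(Expr... subs) {
        return ctx -> Arrays.stream(subs).allMatch(e -> e.evaluate(ctx));
    }

    static Expr or(Expr... subs) {
        return ctx -> Arrays.stream(subs).anyMatch(e -> e.evaluate(ctx));
    }

    public static void main(String[] args) {
        // (client IP starts with 192.168. AND (locale is en_US OR locale is en_GB))
        Expr rule = and(
                condition(ctx -> String.valueOf(ctx.get("clientIP")).startsWith("192.168.")),
                or(condition(ctx -> "en_US".equals(ctx.get("locale"))),
                   condition(ctx -> "en_GB".equals(ctx.get("locale")))));

        Map<String, Object> context = Map.of("clientIP", "192.168.0.1", "locale", "en_GB");
        System.out.println(rule.evaluate(context));   // prints true
    }
}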
Condition Type and Operator Type Interfaces
Examples of condition types can be Requested-Resource, or client-IP-address, time, etc. Given input values, a condition type can evaluate to either TRUE or FALSE. Most condition types take in only a single condition value (e.g., IP address==<value>) but some may take two condition values (e.g., time between <value1> and <value2>).
IConditionType:
- Property: int conditionTypeID
- Property: String name
- EnumSet GetSupportedConditionOperators( )
- int GetConditionValueCount(ConditionOperator op)
- boolean Evaluate(Map context, ConditionOperator op, Object conditionValue)
- boolean Evaluate(Map context, ConditionOperator op, Object conditionValue1, Object conditionValue2)
- Object ConvertValueFromString(String strValue)
- String ConvertValueToString(Object value)
enum ConditionOperator (EQUAL, NOT_EQUAL, GREATER_THAN, LESS_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN_OR_EQUAL, BETWEEN):
- Property: int operatorTypeID
- ConditionOperator valueOf(int operatorTypeID)
enum OperatorLogical (AND, OR, NOT):
- Property: int operatorTypeID
- OperatorLogical valueOf(int operatorTypeID)
- boolean Evaluate(List<Boolean> subExpressions)
Why do we have ConditionOperator and OperatorLogical in separate enums? ConditionOperator is an equality or range operator used in constructing a condition (which consists of a condition type, a condition operator, and one or two condition values), while OperatorLogical is used for constructing nested expressions that are joined together using either AND or OR operators, or negated (NOT). Since we cannot use a ConditionOperator in place of an OperatorLogical, and vice versa, separating the enums provides better type-safe checking.
Rules Management API
Object Manager Interfaces (Management API)
IExpressionManager:
- List<? extends IExpression> GetExpressions( )
- List<? extends IExpression> GetExpressions(Map contextFilter)
- IExpression GetExpression(int expressionID)
- ISimpleExpression CreateSimpleExpression( )
- IComplexExpression CreateComplexExpression( )
- void SaveExpression(IExpression expr)
- void DeleteExpression(IExpression expr)
IRuleManager:
- List<? extends IRule> GetRules( )
- List<? extends IRule> GetRules(Map contextFilter)
- IRule GetRule(int ruleID)
- IRule CreateRule( )
- void SaveRule(IRule rule)
- void DeleteRule(IRule rule)
IConditionTypeManager:
- IConditionType GetConditionType(int conditionTypeID)
- List<? extends IConditionType> GetConditionTypes( )
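A short authoring sketch using the management interfaces above is given below. The setter names are inferred from the Property declarations, and the condition type, operator, and action IDs are illustrative values rather than real registered IDs.

// Sketch only: builds and saves a single-clause rule through the management API above.
public class RuleAuthoringExample {

    public IRule createIpRule(IExpressionManager expressions, IRuleManager rules,
                              int ipConditionTypeID, int equalOperatorID, int actionValue) {
        // A single conditional clause: client IP address == 192.168.0.1
        ISimpleExpression ipClause = expressions.CreateSimpleExpression();
        ipClause.setConditionTypeID(ipConditionTypeID);       // assumed setter names
        ipClause.setConditionOperatorTypeID(equalOperatorID);
        ipClause.setConditionValueStr("192.168.0.1");
        expressions.SaveExpression(ipClause);

        // A rule ties the expression to an action value (e.g., an Experience Definition ID).
        IRule rule = rules.CreateRule();
        rule.setCondition(ipClause);
        rule.setActionValue(actionValue);
        rules.SaveRule(rule);
        return rule;
    }
}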
Rules Architecture Strategy
The strategy for the unified and experience/resource-specific issues is:
- Interfaces
- the core API for managing rules has been consolidated into one unified API
- experience rules and policy rules have specific managers as well
- experiences and policies have separate (non-shared) engines for evaluation
Experience Rules/Definitions
In the reverse proxy server, an Experience Definition is a set of properties that defines a user's experience when logging in to the reverse proxy server and when using the reverse proxy server. Typically, use cases include selecting which login page to use (for the primary authentication), which authentication source to use by default, which error page to display when an error occurs, etc. An experience definition can be selected conditionally based on rules (experience rules), using conditions such as which resource is being requested, the IP address of the request origin, and/or other supported condition types.
An administrator can compose rules based on Boolean expressions (using AND/OR operators on conditional clauses). These rules can have priority (ordering) assigned to them and can be processed in order until the first match (i.e., the first rule that evaluates to TRUE). The output of this rule can be used to pick an Experience Definition for the current request.
The scope of the implementations includes the following components:
- Native Rule Service component:
- Provides a basic implementation of a rule engine that can evaluate rules and expressions defined in the common Rules API section, directly without relying on any external rule engine.
- Experience Rule component:
- Extends and implements the expression/rule interfaces defined by the common Rules API, specifically for experience rules implementation. Persistence of experience rules and the different condition types supported by experience rules can be implemented as part of this component.
- Experience Definition component:
- Experience definition is the “outcome” of executing experience rules, when the rule matches. The definition consists of a set of properties that can be specific to the reverse proxy server. Persistence of experience definitions is part of this component.
Native Rule Service
This component provides a basic rule engine (rule evaluator) implementation that can evaluate rules and expressions defined in the Common Rules API section, natively (i.e., without delegating the evaluation process to any external engine).
Interfaces
Core Rule Engine Interface (Runtime API)
IRuleEngine:
- int Evaluate(Map context)
Core Rule Engine is the basic implementation of a native (built-in) rule engine. At runtime, the engine can be invoked and executed by calling the Evaluate ( ) method on the rule engine passing in the input environment (context) containing the input data at the time of evaluation (e.g., the URL of the request, etc), which can be used by condition type objects to evaluate the expression.
How the input environment (context) gets populated is specific to the underlying implementation of the condition type object. The common Rules API defines only the interface for IConditionType. The class implementation of the interface is the one that decides what key and value that it needs to store in the input environment.
Native Rule Service Interface (Entry-point API for accessing the rule service)
INativeRuleService:
- IRuleEngine getRuleEvaluator( )
- IExpressionManager getExpressionManager( )
- IRuleManager getRuleManager( )
- IConditionTypeManager getConditionTypeManager( )
- void setRuleEvaluator(IRuleEngine re)
- void setExpressionManager(IExpressionManager em)
- void setRuleManager(IRuleManager rm)
- void setConditionTypeManager(IConditionTypeManager ctm)
INativeRuleFactory:
- INativeRuleService getRuleService( )
The INativeRuleService is the entry point to get the rule-related services utilizing the built-in rule engine. From this, we can get access to the runtime API (IRuleEngine). We can get access to the rule and expression management API (IRuleManager, IExpressionManager). We can get the list of available condition types through the IConditionTypeManager. Each of these modules can be swapped with a different implementation of the same, since the INativeRuleService interface defines not only the getter, but also the setter for these different modules.
Implementation
For the implementation of the rule engine, we re-use and refactor the G6 Portal Experience Rule Engine for native rule evaluation within the reverse proxy server. The IRuleEngine implementation can be the refactored engine from G6 Portal. The IExpressionManager/IRuleManager implementation can be Hibernate-backed. We can implement such an IExpressionManager/IRuleManager within the Native Rule Service itself, or delegate those object manager implementations to the downstream component (upper layer) that uses and builds on top of the Native Rule Service (so that the downstream component can have more flexibility, for storing extra attributes in rules/expressions, or using different persistence mechanisms, etc).
Other consideration: Security Service-Based Rule Service
There is another option that we have considered for the rule service implementation that does not require a built-in core rule engine. For this option, both the runtime rule evaluation and the admin-time rule management rely on the XACML provider in Security Service. The rule engine implementation can delegate the actual work to the Security Service XACML provider. The expression/rule manager can use the MBean interface of the XACML provider to create/retrieve/update/delete rules (as XACML policies). One challenge with this option is that Security Service does not expose its internal/core rule engine directly; instead, its rule engine is wrapped inside an access-control/role mapping service interface, and hence the only way to access its rule engine is by making up some mock access-control requests and storing our generic rules and expressions as access-control policies (written in XACML). We may also need to maintain some meta-data outside Security Service to describe extra information not readily stored in Security Service. This approach seems counter-intuitive if what we focus on is a clean/elegant design for the Rule Service implementation. However, if code sharing and code maintenance is a priority, this should give us a nice way to delegate code development and maintenance of a rule engine to the Security Service team.
We decided, for the purpose of experience rules evaluation, to just use and implement the native rule engine, instead of relying on Security Service.
Experience Rules
Experience Rule component builds on top of the Native Rule Service component, by implementing condition types specific for the reverse proxy server experiences, and implementing IExpression, IRule and object managers (IExpressionManager, IRuleManager, IConditionTypeManager) that know how to load and persist experience expressions, rules, and condition types for the reverse proxy server experience.
Condition Types
Following is the list of condition types supported by the reverse proxy server experiences:
- Requested Resource
- Client IP Address
- Time
- Day of Week
- Date
- Client Browser Type (User-Agent request header)
- Locale (Accept-Language request header)
The client IP address condition type can be used to represent both IPv4 and IPv6 IP addresses. The comparison method uses regular expressions to compare the IP-address string, allowing both exact match and wildcard match on an IP address. The IP-address-to-string conversion internally utilizes the HttpServletRequest.getRemoteAddr( ) method that returns a string representation of the IP address. A rule administrator can then create a regular expression to match the IP-address string.
The Time condition type can allow range operators (greater than, less than) in comparing with a condition value, and uses the server's time zone when displayed in the UI but is converted to the UTC time zone when stored in the database.
The list of available condition types for the reverse proxy server experiences at start-up can be predefined (“hard-coded”) or automatically discovered. Although we do not aim for extensibility (of allowing custom condition types) for the reverse proxy server v.1.0, we can design the architecture for discovering the available condition types to be flexible, and in a way that does not prevent customers from writing their own custom conditions in the later version of the reverse proxy server.
To allow flexible settings for the list of condition types, we can either use an XML config file (based on openconfig, stored in the same directory as the reverse proxy server mainconfig.xml), or use a database table and seed the table during installation of the reverse proxy server. The setting contains a list of Java class names that implement the IConditionType interface that can be loaded and used by experience rules.
Populating Input Environment for Condition Type Evaluation
For an expression to evaluate to either TRUE or FALSE, e.g., (Client-IP-Address=192.168.*), we need to be able to obtain the client's IP address from the HTTP request object, and somehow pass that information down to the rule engine, which in turn delegates the evaluation to the corresponding condition type object, which can evaluate the input against the defined condition value (i.e., 192.168.*).
The mechanism that we use to pass that information down is through a context map that we call the input environment. Before the rule engine evaluates the rules, we need to populate the input environment using the data from the HTTP request and other information. Each of the condition type class implementations needs to implement a PopulateContext method with the actual code that extracts the information from the passed-in parameters (the HTTP request object and the reverse proxy server resource) and puts the necessary information (needed by its runtime evaluation) into the input environment context map.
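A rough sketch of how the client-IP condition type might populate and consume the input environment follows; the context key name, the regular-expression handling, and the simplified method shapes are assumptions rather than the actual ConditionTypeClientIP implementation.

import java.util.Map;
import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;

// Sketch only: simplified stand-in for a client-IP condition type.
public class ClientIpConditionSketch {

    private static final String CONTEXT_KEY = "client.ip";   // assumed key name

    // Populate the input environment from the HTTP request before rule evaluation.
    public void populateContext(Map<String, Object> context, HttpServletRequest request) {
        context.put(CONTEXT_KEY, request.getRemoteAddr());
    }

    // Evaluate the clause against the populated context (EQUAL operator only, for brevity).
    public boolean evaluate(Map<String, Object> context, String conditionValue) {
        String clientIp = String.valueOf(context.get(CONTEXT_KEY));
        return toPattern(conditionValue).matcher(clientIp).matches();
    }

    // An admin-time value such as "192.168.*" is turned into a regular expression.
    private static Pattern toPattern(String conditionValue) {
        return Pattern.compile(conditionValue.replace(".", "\\.").replace("*", ".*"));
    }
}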
Condition Type UI Renderers
Since different condition types may have a different UI, e.g., a resource picker for the Requested-Resource condition, a date/time picker for date/time conditions, a set of 4 small boxes for IP-Address, we can employ a UI Renderer mechanism for rendering different condition types (for creation of expressions in the Admin UI).
UI Renderer mechanism can be essential when we need to support extensibility later on, allowing customers to create their own condition type and plug that in to the Experience Rules. By defining the interface for the condition type as well as the renderer for it, customers may create a custom type and provide a renderer for it such that the custom type can (potentially) be discovered in the reverse proxy server Admin UI for creating experience rules.
The mechanism of how a renderer is associated with a condition type can be scoped out in the reverse proxy server Admin UI project.
API/Classes
Java Packages:
- com.plumtree.runner.experience (for the interfaces)
- com.plumtree.runner.persistence.experience (for Persistent* classes)
- com.plumtree.runner.impl.experience (for the rest of the classes)
IExperienceExpression extends IExpression, IPersistentObject
ISimpleExperienceExpression extends ISimpleExpression, IExperienceExpression
IComplexExperienceExpression extends IComplexExpression, IExperienceExpression
IExperienceRule extends IRule, IPersistentObject:
- Property: int priority //(priority/positioning of the rule in relation to other rules)
PersistentExperienceExpression extends PersistentObject implements ISimpleExperienceExpression, IComplexExperienceExpression
PersistentExperienceRule extends PersistentObject implements IExperienceRule
ConditionTypeBase implements IConditionType:
- void PopulateContext (Map context, HttpServletRequest request, IRunnerResource requestedResource)
ConditionTypeResource extends ConditionTypeBase
ConditionTypeClientIP extends ConditionTypeBase
ConditionTypeTime extends ConditionTypeBase
ConditionTypeWeekday extends ConditionTypeBase
ConditionTypeDate extends ConditionTypeBase
ConditionTypeBrowser extends ConditionTypeBase
ConditionTypeLocale extends ConditionTypeBase
IExperienceRuleManager extends IRuleManager (extending for covariant return types)
ExperienceRuleManager implements IExperienceRuleManager
ExperienceConditionTypeManager implements IConditionTypeManager
Experience Service API (Entry-point API to access experience-related services)
IExperienceFactory:
- IExperienceService GetExperienceService( )
IExperienceService:
- IExperienceDefinition Evaluate(HttpServletRequest request, IRunnerResource requestedResource)
- IConditionTypeManager getConditionTypeManager( )
- IExperienceRuleManager getRuleManager( )
For consistency with the rest of the reverse proxy server API, we should modify the IPersistenceService to add the following methods:
- IExperienceExpression createExperienceExpression(boolean isComplex)
- IExperienceRule createExperienceRule( )
- IExperienceExpression getExperienceExpression(int expressionID)
- IExperienceRule getExperienceRule(int ruleID)
- IExperienceExpressionSearch createExperienceExpressionSearch( )
- IExperienceRuleSearch createExperienceRuleSearch( )
IExperienceExpressionSearch extends IRunnerSearch:
- List<IExperienceRule> performSearch( )
IExperienceRuleSearch extends IRunnerSearch:
- List<IExperienceRule> performSearch( )
The entry-point API for accessing experience rules can be the IExperienceService and the IPersistenceService, which both can be accessible from the IRunnerApplication.
Experience Definitions
Experience Definition (XD) is a set of properties that can be used by the reverse proxy server to determine the settings for user's experiences, primarily related to the user's primary authentication to the reverse proxy server (authentication source, login and interstitial pages), before an unauthenticated user accesses a protected the reverse proxy server resource.
An experience definition is the outcome of running experience rules. Hence, the selection process of which XD is to be used for the current request can be made conditional (and customers can define it) based on a number of things.
Properties of Experience Definitions
Each experience definition contains the following properties:
- Login Resource:
- A reverse proxy server resource that provides the login/interstitial/error pages. This is a normal resource under the covers but it has additional restrictions. Its external URL determines where the user can be redirected to in order to begin the login sequence.
- Pre-login Interstitial Page URL suffix:
- A suffix appended to the external URL prefix of the login resource to determine the pre-login interstitial page endpoint (e.g., a page to display “System Down” message).
- Login Page URL suffix:
- A suffix appended to the external URL prefix of the login resource to determine the login page endpoint, which is where users are redirected at the start of interactive login.
- Post-login Interstitial Page URL suffix:
- A suffix appended to the external URL prefix of the login resource to determine the post-login interstitial page endpoint (e.g., to display a “User Agreement” page or “Password expires in 10 days” message).
- Error Page URL suffix:
- A suffix for the error page. An error page only gets visited when an error occurs.
- Authentication Sources, and Default Authentication Source
- Authentication Method
- Post Logout Page URL:
- This URL is visited after SSO logout occurs.
- Max Interstitial Visits:
- This is the number of times a user can visit interstitial pages before causing an error. This prevents the user from not clicking the “I agree” in a consent page and visiting it forever, and also prevents accidental loops since the administrator can write custom interstitial pages. Set it to −1, if no maximum should be enforced.
Persistence of Experience Definitions
Hibernate-based persistence for experience definitions should be relatively straightforward. The relevant association mappings in experience definition:
- Many-to-many mapping from Experience Definition to Authentication Sources
- Many-to-one mapping from Experience Definition to Authentication Method (“proxy authenticator”)
The reverse proxy server can obtain the list of available authentication sources from the Security Service. There can not be any DB table (like PTAuthSources in Portal) that stores the list of available authentication sources in the reverse proxy server schema, which the ExperienceDefinitions table can be joined with. Hence, authentication sources mapping in experience definition can use a simple (element) collection mapping.
API/Classes
Java Packages:
- com.plumtree.runner.experience (for the interface)
- com.plumtree.runner.persistence.experience (for Persistent* classes)
IExperienceDefinition extends IPersistentObject
- Property: IRunnerResource loginResource
- Property: String interstitialPagePreLogin
- Property: String loginPage
- Property: String interstitialPagePostLogin
- Property: String errorPage
- Property: List<Integer> authenticationSourceIDs
- Property: Integer defaultAuthenticationSourceID
- Property: IProxyAuthenticator proxyAuthenticator
- Property: String postLogoutPage
- Property: int maxInterstitialVisit
PersistentExperienceDefinition extends PersistentObject implements IExperienceDefinition
IExperienceDefinitionSearch extends IRunnerSearch:
- List<IExperienceDefinition> performSearch( )
ExperienceDefinitionSearch extends PersistentSearch implements IExperienceDefinitionSearch
Modify PersistenceService to add the following methods:
- IExperienceDefinition createExperienceDefinition( )
- IExperienceDefinition getExperienceDefinition(int expDefID)
- IExperienceDefinitionSearch createExperienceDefinitionSearch( )
Runner-Experience Runtime Integration
This section discusses the runtime integration points of the rule engine, experience rules, and experience definitions into the reverse proxy server's lifecycle of processing stages. Integration points include steps to discover the available condition types for the reverse proxy server experiences, to load experience rules from the database, to initialize the rule engine, to populate the input environment map prior to executing rules for the current request, to execute the rule engine, and to integrate the resulting experience definition into the reverse proxy server's primary authentication module and other places that need to get a handle on the current definition (e.g., to redirect to an error page).
Initialization
The following activities can be performed during the initialization of ExperienceFactory/ExperienceService object, which is performed during PTApplication.init( )
- Initialize ExperienceConditionTypeManager, which performs:
- Discovery of available condition types (from reading from a config XML file or from database)
- Instantiation of these condition type objects into a list that it maintains
- Initialize ExperienceRuleManager, which performs:
- The caching of the rules can rely on Hibernate's second-level cache.
- Instantiate a Native Rule Service instance, and set the condition type manager and rule manager to use the ExperienceConditionTypeManager and ExperienceRuleManager initialized in the previous steps.
Rule Evaluation Time
In processing an incoming request, whenever we need an Experience Definition, we can call the runtime API IExperienceService.Evaluate( ). This API can run all the experience rules and return an IExperienceDefinition object. The following activities can be performed during the execution of this API:
- Populate the input environment (context) by iterating the available condition type objects and call PopulateContext( )method on each, passing in the necessary runtime objects needed by the condition (e.g., HTTP request).
- Invoke the rule evaluator's Evaluate( )method, passing in the input environment context generated in the previous step. The rule evaluator can internally check with the rule manager (ExperienceRuleManager in this case) for any new rules recently added (this should be transparent if we use Hibernate), evaluate the experience rules one by one until the first match, and return the object ID of the Experience Definition to be used.
- Retrieve and return the corresponding ExperienceDefinition object.
Currently, the main downstream consumer of experience service is the SSOLoginModule (for the reverse proxy server's primary authentication). In SSOLoginModule.onAuthenticateUser( ) and its Login Manager classes, the experience rules can be evaluated to produce an ExperienceDefinition containing the needed data for authentication method (“proxy authenticator”), as well as login and interstitial pages if an interactive-login authentication method is selected.
The list of authentication sources listed in the current ExperienceDefinition can be used to render the “authentication source” drop-down list, potentially through a reverse proxy server tag.
The default authentication source information in the current ExperienceDefinition can be used when no authentication source is specified by the user submitting the login form.
It is up to the caller (downstream consumer) of the experience service whether to cache the result of evaluating the rules, or to invoke the rules every time an experience definition is needed. For better performance, the rule evaluation should typically not be invoked more than once per request handling cycle. For requests that do not need an experience definition, the rule evaluation can simply be skipped.
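One way for a downstream consumer to keep the evaluation to at most once per request is sketched below, caching the result as a request attribute; the attribute name and the surrounding class are assumptions, while the Evaluate( ) call matches the experience service API above.

import javax.servlet.http.HttpServletRequest;

// Sketch only: runs the experience rules at most once per request-handling cycle.
public class ExperienceDefinitionLookup {

    private static final String ATTR = "runner.experienceDefinition";   // assumed attribute name

    public IExperienceDefinition lookup(IExperienceService experiences,
                                        HttpServletRequest request,
                                        IRunnerResource requestedResource) {
        Object cached = request.getAttribute(ATTR);
        if (cached != null) {
            return (IExperienceDefinition) cached;
        }
        // First lookup in this request: evaluate the rules once and cache the result.
        IExperienceDefinition definition = experiences.Evaluate(request, requestedResource);
        request.setAttribute(ATTR, definition);
        return definition;
    }
}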
Auditing
Auditing is the examination of records or accounts in order to provide a clearer picture of what was going on at any given point in time. Towards that end, the reverse proxy server auditing module can be responsible for 3 main tasks:
- 1. Audit Recording—Gathering data from the following defined events and storing it in the reverse proxy server DB.
- a. Access Record—(Proxy) Information regarding the accessing of specified resources. (who, which, when, how)
- b. Authorization Configuration Record—(Admin UI) Information regarding any kinds of changes involving authorization policies, including the identity of the changer and the new policy definition.
- c. Resource Configuration Record—(Admin UI) Information regarding any changes made in resource configurations, such as authentication requirement levels.
- 2. Audit Retrieval—Retrieval of queried data, through the Admin UI, from the reverse proxy server DB. The data can be returned as a list of AuditRecord objects. The audit data can also be exported in CSV format to an OutputStream.
- a. The data retrieved can be provided in the following types of reports:
- i. Access Reports
- 1. Resource Access: Given a resource and time period, list all user access instances
- 2. User Activity: Give a user and time period, list all resources accessed
- ii. Authorization Change Report
- 1. Authorization: Given a resource, list all authorization changes over a time period.
- iii. Resource Change Report
- 1. Resource Changes: Given a resource name, list all resource changes over a time period.
- b. The DB tables can be directly accessible to the user. Each audit type can have a table assigned to it. The Authorization Configuration Records and Resource Configuration Records each can also have a separate data table, to store the record-specific properties.
- i. Access Record—ACCESSAUDITRECORDS
- ii. Authorization Configuration Record—AUTHORIZATIONCONFIGAUDITRECORDS, AUTHORIZATIONCONFIGDATA
- iii. Resource Configuration Record—RESOURCECONFIGAUDITRECORDS, RESOURCECONFIGDATA
- 3. Current State of User Access Privileges (CSUAP)—Checking the Proxy server to determine the potential access privileges of any given user with their roles, or all the users who can potentially access a given resource with any of their roles. This can be done through the Admin UI. CSUAP data can not be exported.
- a. Accessibility Reports
- i. User Reach: Given a user, list all resources the user could potentially access, as well as the roles available.
It is NOT a goal of this project to support dependency injection.
Audit Events
These are the 3 types of events to be recorded:
- Access Audit Record
- Generated when users access a specific resource that has been flagged for auditing.
Event Properties
- time—DB generated
- user name—unique, fully qualified, includes domain
- user type—the role used by the user to access the resource
- service name—machine name from where the record is generated
- access resource URL—external URL used to access the resource. Size cap of 256 characters.
- client IP address
- primary authentication method
- resource id
Authorization Configuration Record
Generated when any alterations are made regarding authorization policies.
Event Properties
- time—DB generated
- user name—unique, fully qualified, includes domain
- user type—the role used by the user to access the resource
- service name—machine name from where the record is generated
- action type—creation, change, or deletion
- resource id
- properties
- Resource Configuration Record
- Generated when resource configurations are altered.
Event Properties
- time—DB generated
- user name—unique, fully qualified, includes domain
- user type—the role used by the user to access the resource
- service name—machine name from where the record is generated
- action type—creation, change, or deletion
- resource id
- properties
Audit Recording Configurations
Configuring audits can involve changes on specific resource objects. Both authorization configuration audit records and resource configuration audit records are active by default, and cannot be configured.
Configurable on the resource object:
Can be implemented using getAuditFlag( ) and setAuditFlag( )methods in IRunnerResource
- Unique Resource Visit Access flags—record or ignore unique resource visits. Specific to each resource. All flags are off by default.
These changes in audit configuration can be recorded as Resource Configuration Audit records.
API
The top-level class for the Auditing API is the RunnerAuditMgr, which implements IAuditMgr and can contain all the methods for writing and retrieving data to and from the reverse proxy server DB, using Hibernate. All auditing done through Hibernate can be implemented according to Transaction rules.
AuditFactory implements IAuditFactory, and generates the RunnerAuditMgr to be used in the module.
All persistent objects, except AuditRecord which is abstract, are created by the PersistenceService. The persistent objects are:
AccessRecord, AuthorizationConfigRecord, ResourceConfigRecord, AuditRecord, and AuditData.
AuditRecord objects are used by Hibernate to generate table rows. The AuditRecord class is abstract, and implements IAuditRecord, which contains methods that are shared across all audit records. There are three subclasses of AuditRecord, one for each specific type of audit record: Access, Authorization Configuration, and Resource Configuration. Each of the subclasses implements an interface.
- IAccessRecord for AccessRecords
- IAuthorizationConfigRecord for AuthorizationConfigRecords
- IResourceConfigRecord for ResourceConfigRecords.
The 2 interfaces IAuthorizationConfigRecord and IResourceConfigRecord both extend the IConfigRecord interface.
AuthorizationConfigRecords and ResourceConfigRecords can also contain a List of AuditData objects containing the properties data for the records. AuditData objects are not to be directly accessed by other modules; access to the data can be allowed through the enveloping AuthorizationConfigRecord and ResourceConfigRecord classes, through the getProperties( ) method.
Instances of the class ResourceRolePair are returned in a list for CSUAP requests. This class can be in the common folder, and is to be treated as a primitive.
Code
Java Package: com.plumtree.runner.audit
- Property: long id—Hibernate-generated, unique sequence number
- Property: String serviceName
- Property: String resourceName
- Property: Date auditDate—Hibernate-generated
- Property: String userName
- Property: String userType
- Property: long resource id
- Property: String accessURL
- Property: String accessIPAddress
- Property: int accessPrimaryAuthenticationMethod
- Property: int accessResourceAuthenticationMethod
- Property: List<AuditData> auditDataList
- Property: int actionType
- Property: List<AuditData> auditDataList
- Property: int actionType
- Property: long record_id
- Property: String properties
- Property: int pageNumber
Implementation
Audit Recording
For the writing of all audit records, the following sequence of steps can occur:
- 1. One of 3 methods in com.plumtree.runner.persistence.PersistenceService is called: createAccessRecord( ), createAuthorizationConfigRecord( ), or createResourceConfigRecord( ). An audit record object is returned, with its type depending on the method called.
- 2. The AuditRecord object is set with the proper data.
- 3. RunnerAuditMgr.writeAudit is called with the AuditRecord.
- 4. The generated AuditRecord can be persisted through Hibernate, and depending on its actual type (Access, Authorization Config, or Resource Config), can be recorded in the proper table or tables.
Access Recording
For AccessRecord objects, the method called in com.plumtree.runner.persistence.PersistenceService is createAccessRecord( )
AccessRecord objects can be persisted to the ACCESSAUDITRECORDS table.
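The access-recording sequence can be pictured with the sketch below; the setter names are inferred from the record properties listed earlier and may differ from the actual classes, while createAccessRecord( ) and writeAudit( ) follow the steps described above.

import javax.servlet.http.HttpServletRequest;

// Sketch only: follows the create / populate / write sequence for an access audit record.
public class AccessAuditExample {

    public void recordAccess(PersistenceService persistence, RunnerAuditMgr auditMgr,
                             HttpServletRequest request, long resourceId,
                             String userName, String roleUsed) {
        // 1. Create the record through the persistence service.
        AccessRecord record = persistence.createAccessRecord();

        // 2. Populate it with the event properties (time is DB generated).
        record.setUserName(userName);                               // assumed setter names
        record.setUserType(roleUsed);
        record.setResourceId(resourceId);
        record.setAccessURL(request.getRequestURL().toString());
        record.setAccessIPAddress(request.getRemoteAddr());

        // 3. Persist it; Hibernate writes it to the ACCESSAUDITRECORDS table.
        auditMgr.writeAudit(record);
    }
}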
Authorization Configuration Recording
For AuthorizationConfigRecord objects, the method called in com.plumtree.runner.persistence.PersistenceService is createAuthorizationConfigRecord( )
AuthorizationConfigRecords can be persisted to the AUTHORIZATIONCONFIGAUDITRECORDS table.
In addition to the defined steps for all audit recording (See 8.4.1), the following can also occur:
- 1. When the AuthorizationConfigRecord is defined, member AuditData objects can also be defined.
- 2. When the AuthorizationConfigRecord is persisted through Hibernate, the AuditData data can be stored in the AUTHORIZATIONCONFIGDATA table.
Resource Configuration Recording
For ResourceConfigRecord objects, the method called in com.plumtree.runner.persistence.PersistenceService is createResourceConfigRecord( )
ResourceConfigRecords can be persisted to the RESOURCECONFIGAUDITRECORDS table.
In addition to the defined steps for all audit recording (See 8.4.1), the following can also occur:
- 1. When the ResourceConfigRecord is defined, member AuditData objects can also be defined.
- 2. When the ResourceConfigRecord is persisted through Hibernate, the AuditData data can be stored in the RESOURCECONFIGDATA table.
Audit Retrieval
Retrieving the data can be implemented in 4 methods:
- queryAccessReportByResourceID
- queryAccessReportByUserName
- queryAuthorizationConfigReport
- queryResourceConfigReport
Each method takes in a user name or resource ID, as well as a start date, an end date, a start index, and a maximum number of records to be returned. These define the necessary search parameters. A list of AuditRecords, with the actual class depending on the method called, can be generated, using Hibernate to access the DB tables. The records can be sorted by date, starting with the most recent. There are also 4 queryCount methods, which directly correspond to the query methods. They return the number of records retrieved from the given query.
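A small retrieval sketch follows; the parameter order mirrors the description above (identifier, date range, start index, maximum records), but the exact signature and return type are assumptions.

import java.util.Date;
import java.util.List;

// Sketch only: retrieves the most recent week of access records for one resource.
public class AuditReportExample {

    public List<AccessRecord> lastWeekOfAccess(RunnerAuditMgr auditMgr, long resourceId) {
        Date end = new Date();
        Date start = new Date(end.getTime() - 7L * 24 * 60 * 60 * 1000);

        // Most recent records first, limited to the first page of 100 rows.
        return auditMgr.queryAccessReportByResourceID(resourceId, start, end, 0, 100);
    }
}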
For the retrieving of all audit records, the following sequence of steps can occur:
- 1. Call to RunnerAuditMgr on the desired query method type, with the necessary parameters.
- a. queryAccessReportByResourceID
- b. queryAccessReportByUserName
- c. queryAuthorizationConfigReport
- d. queryResourceConfigReport
- 2. Use Hibernate with the parameters to generate a SQL statement which can retrieve a sorted list of AuditRecords.
- 3. Return list.
There are 4 methods which export reports, one for each query. Each performs a query on the corresponding tables, and creates a CSV-formatted string that is entered into a given output stream. The methods are:
- 1. exportAccessReportByResourceID
- 2. exportAccessReportsByUserName
- 3. exportAuthorizationConfigReports
- 4. exportResourceConfigReports.
Current State of User Access Privileges (CSUAP)
Given a user name, the current state of potential user access privileges can be determined by searching through all registered policies on the Proxy server, and checking each for accessibility to the user. The apps that the given user can potentially access in the current state, combined with the roles they can access those apps as, can then be returned as ResourceRolePair objects in a list. This can be implemented in the getCurrentUserAccessPrivilegesFromUser method.
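Consumption of the CSUAP result might look like the sketch below; the ResourceRolePair accessor names are assumptions, while the query method name matches the one described above.

import java.util.List;

// Sketch only: lists every resource a user could potentially reach and the roles involved.
public class CsuapExample {

    public void printUserReach(RunnerAuditMgr auditMgr, String userName) {
        List<ResourceRolePair> reach =
                auditMgr.getCurrentUserAccessPrivilegesFromUser(userName);
        for (ResourceRolePair pair : reach) {
            System.out.println(pair.getResourceName() + " as " + pair.getRoleName());   // assumed accessors
        }
    }
}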
DB Tables
DB Best Practices and Partitioning
Since access audit records, when active, can cause the DB tables to increase significantly in size, and because only the reverse proxy server DB can be used for the audit records, DB partitioning can be recommended to prevent any adverse effects when accessing other tables in the reverse proxy server DB. In addition, best practices regarding the partitioning can be provided for customers.
DB Table Structure
Due to the nature of the queries supplied, indices are suggested for the following columns in each of the 3 types of AUDITRECORDS tables:
- Time
- UserName
- ResourceID
- ResourceName
Also, regarding the Properties column in both AUTHORIZATIONCONFIGDATA and RESOURCECONFIGDATA, the information stored in Properties can not be handled in the same way as Property Bags in portal.
The tables can not be kept in memory. HBM files can be used to implement this
ACCESSAUDITRECORDS
- AccessIPAddress—NOT NULL, VARCHAR(39) (xxx.xxx.xxx.xxx OR xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx for IPv6)
- AccessPrimAuthenticationMethod—NULL, VARCHAR(20)
- AccessResAuthenticationMethod—NULL, VARCHAR(20)
- ResourceID—NOT NULL, INT
- ResourceName—NOT NULL, VARCHAR(255)
AUTHORIZATIONCONFIGAUDITRECORDS
- ActionType—NOT NULL, TINYINT (1: creation, 2: change, 3: deletion)
- ResourceID—NOT NULL, INT
- ResourceName—NOT NULL, VARCHAR(255)
AUTHORIZATIONCONFIGDATA
Primary Key: RecID, PageNumber
Foreign Key: RecID
- RecID—NOT NULL, INT, DB/Hibernate-generated, unique sequence #
- Properties—NOT NULL, NVARCHAR(2000)
- PageNumber—NOT NULL, INT
RESOURCECONFIGAUDITRECORDS
Primary Key: RecID
- RecID—NOT NULL, INT, DB/Hibernate-generated, unique sequence #
- Time—NOT NULL, DATETIME, DB/Hibernate-generated
- UserName—NOT NULL, VARCHAR(255)
- UserType—NOT NULL, VARCHAR(255)
- ServiceName—NOT NULL, VARCHAR(35)
- ActionType—NOT NULL, TINYINT (1: creation, 2: change, 3: deletion)
- ResourceID—NOT NULL, INT
- ResourceName—NOT NULL, VARCHAR(255)
RESOURCECONFIGDATA
Primary Key: RecID, PageNumber
Foreign Key: RecID
- RecID—NOT NULL, INT, DB/Hibernate-generated, unique sequence #
- Properties—NOT NULL, NVARCHAR(2000)
- PageNumber—NOT NULL, INT
Analytics
Tag Engine
The 6.0 portal tag engine has been componentized so that it can be reused by RProxy and possibly other projects. This componentization includes removing any Portal dependencies, edits to the API to better serve both projects, and factory based creation of engine components. Converting the Tag Transformation Engine to use factories to retrieve internal components can allow RProxy or Portal to override common components if they have specific requirements that are not met by the componentized Engine. This can also aid in testing.
Security Service
Security Service—Current Design
The current BID Security Service project can be used for:
- access the portal user and group directory
- access user profile elements
- access the capability and role store
- perform authentication
- access the user credential vault
- internal security (mainly for RAdmin) on the reverse proxy server objects.
All these features can be needed by RProxy and RAdmin.
The Security Service wrapper that is implemented for the reverse proxy server can provide us with role mapping and policy rule evaluation capabilities, which poses a strange problem: why are security features split between two different services, the BID Security Service and the reverse proxy server Security Service Wrapper?
The background behind this is that before the BEA acquisition, the BID Security Service was designed to be a one-stop-shop for all things directory related and security related. However, since Security Service already provides extensive functionality around security policies and role mapping, it is the clear choice for these features.
Security Service provides these features as raw MBeans, which means there is no compile-time API. In addition, the reverse proxy server needs to feed the reverse proxy server objects and general constructs that have no equivalent in the Security Service API (for example, a Policy Rule Builder API). These reasons justify the construction of a wrapper API for Security Service that would be used internally for the reverse proxy server project.
Directory Service and Security Service
Since Security Service needs to be wrapped, and since BID Security Service is really more of a directory service with some miscellaneous security features tacked on, the logical solution is to:
- rename the current Security Service the more accurate title of “Directory Service” and have it provide exclusively directory related features
- consolidate the security features of Security Service and the current Security Service under a new, unified API that provides exclusively security related features
- The new API would be (appropriately) called the “Security Service”
Portal Experience Rule Engine
The design and concepts of expressions and rule engine in Portal 6.0 can be reused for the reverse proxy server. However, the implementation of Portal's rule engine is tightly coupled with the portal code base, in a way that most of the rule engine implementation for the reverse proxy server would need a rewrite. In addition, additional refactoring and restructuring of the API are needed to make the rule engine generic and not tied to any persistence requirement.
BEA Component Reuse
Security Service Standalone Product
Standalone Security Service can be used as the reverse proxy server's Rules Engine. To be specific, Security Service can provide the following functionality to the reverse proxy server:
- Resource Policy rule construction and evaluation, using the Security Service Xacml Authorization provider
- Resource Policy role mapping construction and evaluation, using the Security Service Xacml Role Mapping provider
- Experience Definition rule evaluation, using the Authorization provider
CWAPI—The proxy Security API (Security Service Wrapper API)
The said functionality of Security Service can be wrapped with an API for internal use by the reverse proxy server, which we can abbreviate as CWAPI. CWAPI speaks in the currency of the reverse proxy server constructs, but under the hood converts relevant objects into appropriate XACML strings in order to work with Security Service. For more information on CWAPI, refer to the section on Security Services.
CWAPI Standalone
As the reverse proxy server can be the first (and only) consumer of Security Service in the ALUI product line in the near future, the first version of the reverse proxy server can not expose CWAPI. However, depending on our findings during Springbok:
- the parts of this wrapper functionality that is relevant to Security Service might be absorbed back into Security Service itself
- alternatively, or in conjunction, the wrapper might be spun off and released internal to BID as a Security Service utility class for consumption by other ALUI products.
Resource URL Mappings
Resource registration requires entry of external and internal URLs. The user need not be concerned with whether these URLs represent a reverse proxy, transparent proxy, or combination; the user only needs to enter URLs.
External URLs are what the user sees, are input into a browser, and are represented in requests to RProxy. When external URLs point to the RProxy hostname, no DNS changes are required. (This is the reverse proxy case.) External URLs that are not based on the name of the RProxy machine require DNS changes. (This is the transparent proxy case.)
Internal URLs usually refer to machines on the intranet (though they could be out on the internet). These are the machines RProxy sends requests to and retrieves HTML content from.
Requirements for a resource URL definition are as follows, with examples below.
External URL(s) must be either a relative path like “/a”, a hostname like “crm.bea.com” or “crm”, or a combination thereof such as “crm.bea.com/a”. Multiple URLs can be used as aliases for the same resource. An external URL may include “http://” or “https://”, or omit the protocol; if the protocol is omitted, a request using either http or https can match.
The internal URL must specify a hostname, such as “crm.internal.com”, and may specify a path, such as “crm.internal.com/a”. If no protocol is specified, http is assumed; either http or https may be explicitly specified. The external URL's protocol has no effect on the internal URL's protocol.
Both external and internal URLs are stored in the database as normalized URLs. The request URL is normalized before determining the matching resource for it. This avoids the problem of two different URL strings referring to the same physical URL location.
The algorithm for matching a request URL involves three steps. First, the protocol is removed from the request URL, and the resulting string is used to find a matching resource, searching from the most specific path to the most generic path. If no match is found, the second step repeats the search using the request URL with its protocol included. If still no match is found, the third step searches on the path of the request URL alone.
For example, for a request URL of http://proxy.runner.com/path1/path2/resource, the first step searches on proxy.runner.com/path1/path2/resource, proxy.runner.com/path1/path2, and proxy.runner.com/path1. The second step repeats the same search with the http:// prefix included. The third step searches on /path1/path2/resource, /path1/path2, and /path1.
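A minimal sketch of this three-step lookup is shown below; the class, method names, and normalization are illustrative only, not actual product code:

    import java.util.Map;

    public class ResourceLookupSketch {
        // Step 1: host+path without protocol; Step 2: with protocol; Step 3: path only.
        public static String findResource(String requestUrl, Map<String, String> resourceMap) {
            String normalized = normalize(requestUrl);
            String noProtocol = normalized.replaceFirst("^https?://", "");

            String match = longestPrefixMatch(noProtocol, resourceMap);
            if (match == null) match = longestPrefixMatch(normalized, resourceMap);
            if (match == null) {
                int slash = noProtocol.indexOf('/');
                if (slash >= 0) match = longestPrefixMatch(noProtocol.substring(slash), resourceMap);
            }
            return match;
        }

        // Walk from the most specific path to the most generic one.
        private static String longestPrefixMatch(String key, Map<String, String> resourceMap) {
            while (key != null && !key.isEmpty()) {
                if (resourceMap.containsKey(key)) return resourceMap.get(key);
                int cut = key.lastIndexOf('/');
                key = (cut > 0) ? key.substring(0, cut) : null;
            }
            return null;
        }

        private static String normalize(String url) {
            return url.trim().toLowerCase();   // placeholder for the real normalization rules
        }
    }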
Resource URL Mapping Examples
In the following examples, proxy.runner.com refers to the reverse proxy server host. Each of the numbered entries below represents a distinct resource and below it, example http requests are provided.
The format of the examples is:
- 1) resource1 external URL=>internal URL to map to
- 2) resource2 external URL=>internal URL to map to
- example=>internal rewritten request after mapping
- Notes
- 1) /a=>crm.internal.com/a
->crm.internal.com/a
- Base case for relative URL mapping.
- 1) proxy.runner.com/a=>crm.internal.com/a
->crm.internal.com/a
- When the reverse proxy server's hostname is specified explicitly, the behavior is identical to a relative mapping.
- 1) /a=>crm.internal.com
->crm.internal.com
->crm.internal.com/b
->crm.internal.com/b/c
- If a relative mapping omits the path from the internal URL, as 1) above does when /a is excluded from crm.internal.com, the path is stripped from the request before it is sent to the internal host.
- 1) /a=>crm.internal.com/a
- 2) /a/b=>crm.internal.com/a/b
->crm.internal.com/a
->crm.internal.com/a/b
- Subpaths are supported; most exact match is used.
- 1) /a=>crm.internal.com/a
- 2) /a/b=>finance.internal.com/a/b
->crm.internal.com/a
->finance.internal.com/a/b
- Subpaths even allow mappings to different internal hosts.
- This case may require a warning to the admin user about unintended consequences if crm/a creates links to crm/a/b that are not meant to go to finance.
- 1) crm.bea.com=>crm.internal.com
->crm.internal.com/anything
- Base case for transparent proxy mapping; requires a DNS update since the external URL differs from the reverse proxy server hostname.
- 1) crm.bea.com=>crm.internal.com
- crm [alias on the same resource]
->crm.internal.com/anything
->crm.internal.com/anything
- Multiple external URLs for a resource in a transparent proxy mapping; requests from the same domain need not be fully qualified.
- 1) finance.bea.com=>finance.internal.com
- drwho [alias on the same resource]
- finance.pt.com [alias on the same resource]
->finance.internal.com/anything
->finance.internal.com/anything
->finance.internal.com/anything
- Multiple external URLs for a resource. Internal requests can use a DNS alias. Other hostnames can be mapped via DNS to the same resource.
- 1) /a=>finance.internal.com/a
->finance.internal.com/a
->finance.internal.com/a/*
- A terminating / is ignored when matching request URLs against resources.
- A terminating / is stripped from resources if they are entered in the admin console during resource definition. (It is unclear whether this is acceptable.)
- The reverse proxy server itself does NOT distinguish between files and paths; the remote web server decides this.
- 1) proxy.runner.com/a=>crm.internal.com/a
- 2)>https://crm.internal.com/b
- 3)>
- 4)>crm.internal.com/d
- 5) proxy.runner.com/e=>
- If no http or https is specified as in 1) above, http is implied for the internal URL. For the external URL, either http or https can match.
- The resource can specify http or https in the external URL and then ONLY the specified protocol can match. The internal URL can map to either https or http as shown in 2), 3), and 4) above.
- An https connection can be forced internally but allow requests to an external URL via http or https, as in 5) above.
- 1) crm.bea.com=>crm.internal.com
- 2) /a=>finance.internal.com/a
->crm.internal.com/a
->finance.internal.com/a
- When an external URL seemingly conflicts with a relative resource as 1) does with 2) above, the most exact match is applied.
- In this case, “most exact match” means hostname has a higher precedence than relative URLs.
- 1) /spaces are here=>crm.internal.com/spaces are here
->crm.internal.com/spaces%20are%20here
- Special characters such as spaces are allowed in resource mappings. Internally, they are stored URL encoded. In the admin console UI, they are visible as spaces.
- 1) /a=>crm.internal.com/a
- 2) /a=>finance.internal.com/a
- 3) crm.bea.com=>crm.internal.com
- 4) crm.bea.com=>crm.internal.com
- 5) finance.bea.com=>finance.internal.com
- finance.pt.com
- 6) finance.pt.com=>finance.internal.com
- Each of the above pairs are prohibited. Multiple resources cannot have the same external URL.
- 1) proxy.runner.com=>crm.internal.com/a
- 2) /=>crm.internal.com/b
- Mapping the RProxy hostname without any path is prohibited.
- Mapping a relative URL of / alone is prohibited.
Allowed resource mappings from external to internal are “n to 1,” that is, “n” external request URLs can be mapped to only one internal URL, per resource. Resource mappings can allow configuration of authentication source options, login page, and something akin to experience definitions.
Roles
The Administration Console provides a means of managing users and groups in roles. This means that the Administration Console can most likely have a dependency on the security framework used in the reverse proxy server. This could mean that the Administration Console would also have to run in WebLogic Server.
Pagelet Catalog UI
The Pagelet Catalog UI can provide a left hand index and right side detail view, similar to our portal tagdocs. This UI can also display the tagdocs for the reverse proxy server tags. The left hand index can be split into 2 frames, an upper one containing the tag libraries and pagelet namespaces, and a lower one containing a full list of tags and pagelets. The right hand side can contain our standard tagdocs, or a single page showing all the details of a pagelet.
Persistent Object List
- IRunnerResource is the reverse proxy server resource object. There are two types of resources. A protected resource is a resource that represents a protected application. Each protected resource must have a reference-type association with an experience definition object; by default a protected resource references the default experience definition. The second type of resource is a login resource, which must not have an association with an experience definition. A protected resource that is configured for HTML form based authentication can have a value-type association with multiple IPTResourceRemoteLoginInfo objects.
- IExperienceDefinition is an object that specifies login, interstitial, and error pages, as well as the default authentication source and authentication method for login. IExperienceRule is an object that defines the conditional clauses (expressions), which can be evaluated at runtime to select which IExperienceDefinition to use for the current request.
- IExperienceExpression is an object that defines the conditional clauses separated by operators (AND/OR) that evaluates to either TRUE or FALSE. IExperienceExpression is part of an IExperienceRule.
- IPTResourceRemoteLoginInfo is an object that contains all the information the reverse proxy server needs to detect the login page and authenticate for resources with HTML form based authentication. Its association with IPersistentResource is a value-type association in which the resource object is the containing object, which means all persistence operations on this object should be done through the resource object.
- ILoginInputField is an object that contains HTML POST input field/value pair for HTML form based authentication. Its association with IPTResourceRemoteLoginInfo is of value type where the IPTResourceRemoteLoginInfo is the containing object.
- IPersistentPagelet is the pagelet object. Pagelets are contained within a reverse proxy server resource object. It also references a list of consuming resources. In addition, pagelets contain a list of pagelet parameters.
- IPageletParameter describes a parameter in a pagelet tag. It is contained within a pagelet, and has no external references.
- IPersistentResourceMap maintains a mapping between external URL prefixes and resources. It is used to determine the requested resource when a request enters the reverse proxy server.
- IProxyAuthenticator is used to map an authentication level to an authenticator (i.e. basic auth).
Non-persistent Object List
- IPagelet is an object that wraps an IPersistentPagelet and provides additional business logic, such as calculating the full pagelet URL.
- IPageletRequest is an object that holds all of the data necessary to process an individual pagelet, including a reference to the IPagelet object.
Persistent Object Data
Resource
Interface: IRunnerResource
- String: Name, Description, Owner, Policy Owner, Pagelet Namespace
- List<String>: External URL Prefixes
- IOKURL: Internal URL Prefix, Internal URL Secure Prefix
- Boolean: Is Enabled, Audit Flag
- IResourceAuthenticationInfo: Authentication Info
- IResourceRetrievalInfo: Retrieval Info
- List<IResourceRemoteLoginInfo>: Remote Login Info
- List<IPersistentPagelet>: Pagelets
- IBasicAuthLoginInfo: Basic Auth Login Info
- List<IRunnerResource>: Linked Resources
- List<IResourcePart>: Resource Parts
- Int: Timeout
- IResourceCSPSettings: CSP Settings
Interface IResourceAuthenticationInfo
- Boolean: Is Login Resource
- Int: Required Authentication Level
- List<String>: Internal Logout Patterns
Interface IResourceRetrievalInfo
- Map<String, String>: Proxy to Remote Header Filter, Remote To Proxy Header Filter
Interface IResourcePart
- String: Name, URL Pattern
Interface IResourceCSPSettings
- boolean: Send Login Token
- List<String>: User Info Pref Names, Session Pref Names
Remote Login Info
Interface: IResourceRemoteLoginInfo
- String: Name, Form URL, Action URL
- boolean: Use Text Pattern For Page Detection, Use Text Pattern For Action URL, Submit Action As POST
- List<ILoginInputField>: Form Fields
Interface: IBasicAuthLoginInfo
- ILoginInputField: Name, Password
Interface: ILoginInputField
- String: Name, Value
- int: Type
Pagelet
Interface: IPersistentPagelet
- String: Name, Description, Code Sample, Pagelet URL, Pagelet Secure URL, Payload Schema URL
- IRunnerResource: Parent Resource
- boolean: Publish Docs, Allow All Consumers
- List<IPageletParameter>: Parameters
- Map<String, String>: Metadata
- List<IRunnerResource>: Consuming Resources
Interface: IPageletParameter
- String: Name, Description, Type
- boolean: Is Mandatory
Resource Map
Interface: IPersistentResourceMap
- Map<String, Integer>: Resource Map
Proxy Authenticator
There can be several standard Proxy Authenticators that come with the product, and administrators can create new Proxy Authenticators. The preexisting Authenticators can reference internal class names, but administrators can only configure Remote Proxy Authenticators (i.e. specify a URL, not a class name).
Interface: IProxyAuthenticator
- String: Name, Description, Integrated Authenticator Class Name, Remote Integration URL
- boolean: Is Enabled, Is Interactive, Is Remote
Tag Libraries
The reverse proxy server can provide both its own unique tags and extensions to the common tags provided by the Portal.
The reverse proxy server Tags
These tags can all be included in the reverse proxy server tag library.
These tags behave similarly to the ptdata tags in the Portal.
The pagelet tag is simply the way to add a pagelet to an HTML page.
This tag can require serious thought to get right. UI pickers are extremely complicated.
Common Tags
These tags are shared between the reverse proxy server and portal. They include new tags (with the reverse proxy server specific implementations) and enhancements to the pre-existing general tag libraries for specific reverse proxy server needs.
If Tag Set
Since the reverse proxy server does not include the Portal Standard tags, such as the choose/when/otherwise tag, we can take this opportunity to develop a more flexible way of doing conditionals using tags. This way we can have a variety of expr tags (such as the reverse proxy server roleexpr tag) that store boolean results for use by the if tag. This simplifies the tag, and avoids the problem of having to figure out if the expression is an integer expression, a role expression, or something else.
The corresponding if-true and if-false tags do not have any attributes. They are simply displayed or not displayed depending on the result of the if tag. An additional expression tag is necessary to enable conditional logic based on how many auth sources or resources are returned (i.e. empty, a single item, or a list).
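As a purely illustrative sketch (the exact tag prefixes, attribute names, and syntax are assumptions, not the actual tag library), a role-based conditional built from these tags might look like:

    <!-- store a boolean result under a key, then branch on it -->
    <roleexpr role="Managers" key="isManager"/>
    <if key="isManager">
      <if-true> Welcome, manager. </if-true>
      <if-false> You do not have access to this content. </if-false>
    </if>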
Variable Tag Enhancement
This tag needs to be converted for use with the collection tag so that you can create collections of variables, in addition to collections of data objects. This is useful for the rolesdata tag. The collection tag needs a corresponding enhancement as well.
Security Model and Threat Assessment
The most important security issue in RProxy is the use of the Credential Vault. In order to manage authentication to back-end resources, the reverse proxy server stores a user's username and password for each resource in a Credential Vault. While this information can be encrypted, the reverse proxy server needs to be able to extract the non-encrypted data from the vault in order to pass the username and password to the resource. This is dangerous because if the reverse proxy server can decrypt the Credential Vault, someone else could figure out how to do it as well (i.e. steal the reverse proxy server's encryption key, which needs to be stored in the database or file system). This could provide someone with direct access to back-end resources using another user's credentials (i.e. accessing the finance app as the CEO). One way to avoid this problem is to encrypt the secondary passwords in the credential vault with the user's primary password, so that more than just the server's encryption key is needed for decryption.
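A minimal sketch of that last mitigation is shown below, assuming a PBKDF2-derived key and AES; the algorithm choices, parameters, and names are illustrative only, not the product's actual implementation:

    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class VaultSketch {
        // Derive a key from the user's primary password so that the server's own
        // encryption key alone is not enough to recover the secondary credentials.
        public static byte[] encryptSecondaryPassword(char[] primaryPassword, byte[] salt,
                                                      byte[] secondaryPassword) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] keyBytes = factory.generateSecret(
                    new PBEKeySpec(primaryPassword, salt, 10000, 128)).getEncoded();
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"));
            return cipher.doFinal(secondaryPassword);
        }
    }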
Because of this, at the very least customers need to be able to disable this feature. They can currently do this on a per-resource basis by leaving the authentication set to NONE (default). Since Analytics uses the PMB, the reverse proxy server documentation should include a security advisory that this multicast UDP traffic can be sniffed and therefore the network containing the reverse proxy server and Analytics servers should be isolated from the rest of the network.
Use Inkscape and XSLT to Create Cross-Platform Reports and Forms
Listing 5. A Portion of the PHP Script That Transforms the Claim XML into an SVG and Displays It in a Browser
// import the SVG XSLT
$xsl = new XSLTProcessor();
$xsl->importStyleSheet(DOMDocument::load("svg_xslt.xsl"));

// load the claim data XML
// $claim is the database result from Listing 4
$doc = new DOMDocument();
$doc->loadXML($claim);

// tell the browser this is an SVG document
header("Content-Type: image/svg+xml");

// print the SVG to the browser
echo $xsl->transformToXML($doc);
Listing 5 is a simplified version of our solution. In our solution, there is the possibility of having multiple pages for a single claim. To fix this, we had to do multiple transformations, one for each page. To get the multiple-page claims to display in the same browser window, we had to embed them. This can be done using the embed and object HTML tags. Note that there are several issues with browser compatibility when using these tags. To solve the compatibility issues, we wrote a script that checks the user's browser and decides which tag to use. Then, we set the target object data/embedded source to a script similar to the one in Listing 5. This allowed the Web browser to display multiple SVG images in the same window.
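A rough sketch of that browser-detection idea follows; the user-agent tests, dimensions, and the show_claim_page.php script name are assumptions, not the article's actual code:

    $agent = $_SERVER['HTTP_USER_AGENT'];
    $src   = "show_claim_page.php?claim=$claim&page=$page";   // a script like Listing 5

    if (strpos($agent, 'MSIE') !== false) {
        // Internet Explorer (with the Adobe SVG plugin) behaves better with <embed>
        echo "<embed src=\"$src\" type=\"image/svg+xml\" width=\"850\" height=\"1100\" />";
    } else {
        // Firefox 1.5+ renders native SVG via <object>
        echo "<object data=\"$src\" type=\"image/svg+xml\" width=\"850\" height=\"1100\"></object>";
    }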
Other considerations must be made when using SVG images in a Web browser environment. Internet Explorer does not have native support for SVG images. The user is forced to use a third-party plugin to display the images. Adobe provides one of these for free. Mozilla Firefox has built-in support for SVG images starting with version 1.5. However, Firefox does not support several aspects of SVG images, such as scaling and grouped objects. Fortunately for us, all of our users use an up-to-date version of Firefox.
That is all there is to it. Figure 5 shows a claim image with all of the data filled in.
Once we finished the Web end of our solution, we turned our sights toward the rest of our integration. This meant we had to print the SVG images and find a way to archive them. Some clients request that we send them copies of the claims printed and/or electronically. Because all of our back-end software is written in Python, it also meant we had to do the XML transformation in a different language. To do all of the XML work, we used the 4Suite XML API.
To print the images, we again turned to Inkscape, because our PostScript printer drivers would not print the SVG images. Inkscape has a handful of command-line options that tell Inkscape to run in command-line mode, thus suppressing the graphical interface. The one we used to print is the -p option. This, combined with the lpr command, allowed us to print our images without any user interaction. Listing 6 shows how we did the same transform we did in Listing 5, except now in Python. The example also shows how we called Inkscape to print our claim images.
Listing 6. Same Transform as Shown in Listing 5, Except Using Python
import os
from Ft.Xml.Xslt import Processor
from Ft.Xml import InputSource
from Ft.Xml.Domlette import NonvalidatingReader

# load the claim data XML
# claim is the database result from Listing 4
doc = NonvalidatingReader.parseString(claim, "")

# load and process the XSLT
xsl = InputSource.DefaultFactory.fromUri("")
processor = Processor.Processor()
processor.appendStylesheet(xsl)

# do the transformation
result = processor.runNode(doc, "")

# write the SVG to a file
f = open("/tmp/" + claim + ".svg", "w")
f.write(result)
f.close()

# print the image on the default printer
os.system("inkscape /tmp/" + claim + ".svg -p | lpr")
Earlier, I mentioned we often have multiple pages per claim. When printing, this was not an issue; we simply would send each page to the printer as a separate job. When it came to archiving, we had to do something different. As with the Web interface, we had to group the pages, this time into a file, not a Web browser. When archiving, we had to store the files in PDF format, because that is what our clients wanted. To get the images into a PDF and combine the multiple page claims, we used Inkscape and Ghostscript.
As with printing, Inkscape has an option to export a file into PostScript format. Instead of using -p, we use -P and pass Inkscape the desired output filename. After all of the pages of a claim have been written to files, we use the following Ghostscript command to put the pages into a single PDF and archive them:
gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=out.pdf /tmp/foo1.ps /tmp/foo2.ps
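Putting the pieces together, the per-claim archiving step might look roughly like the following sketch; the claim and page_count variables, file names, and output path are assumptions rather than the production code:

    import os

    ps_files = []
    for page in range(1, page_count + 1):
        svg = "/tmp/%s_p%d.svg" % (claim, page)
        ps  = "/tmp/%s_p%d.ps"  % (claim, page)
        # export each page to PostScript using Inkscape's command-line mode
        os.system("inkscape %s -P %s" % (svg, ps))
        ps_files.append(ps)

    # combine all pages into a single PDF with Ghostscript
    os.system("gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite "
              "-sOutputFile=/archive/%s.pdf %s" % (claim, " ".join(ps_files)))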
Hello Andreas Gruenbacher.

AppArmor can't determine which pathname (/tmp/public/file or /tmp/secret/file) was requested by the touch command if a bind mount is used in the following way

# mkdir /tmp/public /tmp/secret
# mount -t tmpfs none /tmp/public
# mount --bind /tmp/public /tmp/secret
# touch /tmp/public/file

because security_inode_create() doesn't receive the vfsmount, can it?

It is possible to determine that the requested pathname is either /tmp/public/file or /tmp/secret/file by comparing the address of the vfsmount available from current->namespace, but it is impossible to determine which one.

> the path being checked for the creat call must be "/tmp/b/f", even though
> process A never explicitly used "b". If that's not what TOMOYO is doing, then
> that's badly broken.

Yes, of course, TOMOYO checks "/tmp/b/f". What I meant by PROCEDURE FOR REACHING is "which directory does the process need to go through to reach the requested file if the process's current directory is the root of the process's namespace". And in this case, it is "/tmp/b/f".

Thanks.
I promised in the last post that I’d show how to do template extensibility and customization using inheritance with regular templates, rather than the preprocessed kind, so here goes.
The preprocessed solution was a three-layer design.
This system works quite well, but it’s a bit clumsy, needing as it does harness templates to call the preprocessed ones and smearing application concepts (Book and Author) across two of its layers. It would be nice to move to just two layers
I scratched my head for a couple of hours and came up with this:
Much cleaner! So what’s different that’s enabled us to dispense with the intermediate ‘Templates’ project?
Here’s how that inheritance hierarchy looks:
OK, so that’s all the dirty mechanics; let’s take a look at the new, single-layer application templates:
<#@ template debug="false" hostspecific="false" language="C#" inherits="BaseTemplates.DataClass" #>
<#@ assembly name="$(SolutionDir)\BaseTemplates\$(OutDir)\BaseTemplates.dll" #>
<#@ import namespace="BaseTemplates" #>
<#@ include file="ProjectSpecific.t4" #>
<#@ output extension=".cs" #>
<#
this.Description = new TypeDescription
{
    Name="Author", Description="A class to carry data about an author in a library system.",
    Properties=
    {
        new TypePropertyDescription{Name="ID", Type="string", Description="The ID of the author."},
        new TypePropertyDescription{Name="GivenName", Type="string", Description="The given name of the author."},
        new TypePropertyDescription{Name="FamilyName", Type="string", Description="The family name of the author."},
    }
};
#>
namespace TemplateUse
{
    using System;

<#
    PushIndent();
    base.TransformText();
    PopIndent();
#>
}
This is essentially a coalescing of the Book.tt and BookConsumer.tt templates from the previous version of the solution, but with much less T4 infrastructure on show. Once again, the project-specific customizations for richer comments and Serialization are included from ProjectSpecific.t4, which is unchanged, and all of the application-level code is now in a single, clean layer.
I think this gives us a really nice pattern now.
Here’s the updated demo solution:
For the curious, what’s going on under the hood here? The key to this variation is that regular T4 templates MUST be derived ultimately from the built-in T4 base class, Microsoft.VisualStudio.TextTemplating.TextTransformation (top left in the class diagram). Preprocessed templates were specifically designed NOT to take a dependency on Visual Studio, so they lift this restriction and instead rely on duck-typing. They simply require a base class with the same set of methods as the built-in one.
So why can’t DataClass inherit directly from the built-in base? It turns out that the duck-typing match is not exact and in particular, the calls to a utility class, ToStringHelper, are instance-based in the case of preprocessed templates and static in the case of regular templates. The adapter class simply reroutes instance-based calls to the static class. With this one little trick, regular templates can inherit from any preprocessed template that in turn inherits from the adapter class.
An Alternative Python Reference: Markup
Fredrik Lundh | January 2006
(This page belongs to the Alternative Python Reference project.)
Markup
The tools use a preliminary markup format, based on basic HTML with JavaDoc tags for semantic markup. JavaDoc is widely used, and works well for reference documentation, which tends to use mostly library/language related semantic markup. And if you strip down HTML and use other markup to create links, there's not much left: only a simple tag here and there.
In a full JavaDoc system, the tools get the program structure from the source code. In this system, the tools derive structure from special section tags instead.
@@module __builtin__
Built-in functions, exceptions, and other objects.

@@function len(obj)
Returns the number of items in a sequence or mapping.
@@module, @@function etc. are section tags. All the text inside a section describe one target object (in this case, the len function) inside a target scope (in this case, the builtin namespace).
Standard PythonDoc markup can be used to add more details to a section. If given, the @def directive overrides the signature given in the section marker.
@@function range

This is a versatile function to create lists containing arithmetic
progressions. It is most often used in {@link python:for} loops. All
arguments must be plain integers.
<p>
The full form returns a list of plain integers [{@var start}, {@var start} +
{@var step}, {@var start} + 2 * {@var step}, ...]. If {@var step} is positive,
the last element is the largest such value less than {@var stop}; if
{@var step} is negative, the last element is the smallest value greater than
{@var stop}.
<p>
In the abbreviated form, {@var start} is set to 0, and {@var step} to 1.

@def range(stop)
@def range(start, stop, step=1)

@param start The start value. Defaults to 0.
@param stop The stop value. This is not included in the returned sequence.
@param step The non-zero step value. Defaults to 1.
@return A list containing an integer sequence.
@raises ValueError If {@var step} is zero.
The {@var} directive is used to mark arguments inside a function or method description. The inline {@link} directive is used to refer to other target objects.
Notes
The translator uses inline @link elements to mark all known reference targets. It’s up to the renderer to decide what references to turn into hyperlinks in the resulting document. An HTML renderer may for example link only the first occurrence of each item within a section, or use CSS to make subsequent links less prominent (unless the user hovers over them).
{@var}, {@samp} etc. are non-standard tags. Use <var> instead?
Using {@link} for external links (http:) is overkill. Use <a> instead!
public class RoutingSlip extends ServiceSupport implements AsyncProcessor, Traceable
The implementation mirrors the Pipeline in the async variation, as the failover load balancer is a specialized pipeline. So the trick is to keep doing the same as the pipeline to ensure it works the same and the async routing engine is flawless.
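For orientation, a route that uses this processor via the Camel DSL typically looks something like the following sketch; the endpoint URIs, header name, and delimiter are illustrative:

    import org.apache.camel.builder.RouteBuilder;

    public class RoutingSlipRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("direct:start")
                // the header is expected to contain e.g. "mock:a,mock:b,mock:c"
                .routingSlip(header("myHeader"), ",");
        }
    }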
shutdown, shuttingdown, started, starting, stopped, stopping, suspended, suspending
doResume,
protected final org.slf4j.Logger log
protected ProducerCache producerCache
protected int cacheSize
protected boolean ignoreInvalidEndpoints
protected String header
protected Expression expression
protected String uriDelimiter
protected final CamelContext camelContext
public RoutingSlip(CamelContext camelContext)
public RoutingSlip(CamelContext camelContext, Expression expression, String uriDelimiter)
public void setDelimiter(String delimiter)
public boolean isIgnoreInvalidEndpoints()
public void setIgnoreInvalidEndpoints(boolean ignoreInvalidEndpoints)
public int getCacheSize()
public void setCacheSize(int cacheSize)
public String toString()
toString in class
Object
public String getTraceLabel()
getTraceLabel in interface Traceable
boolean doRoutingSlip(Exchange exchange, Object routingSlip, AsyncCallback callback)
protected RoutingSlip.RoutingSlipIterator createRoutingSlipIterator(Exchange exchange) throws Exception
exchange- the exchange
Exception
protected Endpoint resolveEndpoint(RoutingSlip.RoutingSlipIterator iter, Exchange exchange) throws Exception
Exception
protected Exchange prepareExchangeForRoutingSlip(Exchange current, Endpoint endpoint)
protected boolean processExchange(Endpoint endpoint, Exchange exchange, Exchange original, AsyncCallback callback, RoutingSlip.RoutingSlipIterator iter)
protected void doStart() throws Exception
ServiceSupport.doStop() for more details.
doStart in class
ServiceSupport
Exception
ServiceSupport.doStop()
protected void doStop() throws Exception
protected void doShutdown() throws Exception
doShutdown in class
ServiceSupport
Exception
Apache Camel
getpwuid_r()
Get information about the user with a given ID
Synopsis:
#include <sys/types.h> #include <pwd.h> int getpwuid_r( uid_t uid, struct passwd* pwd, char* buffer, size_t bufsize, struct passwd** result );
Since:
BlackBerry 10.0.0
Arguments:
- uid
- The userid whose entry you want to find.
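A minimal usage sketch (the buffer size and error handling are illustrative only):

#include <sys/types.h>
#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct passwd pwd, *result = NULL;
    char buffer[1024];   /* assumed large enough for this sketch */

    if (getpwuid_r(getuid(), &pwd, buffer, sizeof(buffer), &result) == 0 && result != NULL) {
        printf("login name: %s\n", result->pw_name);
    }
    return 0;
}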
Classification:
Last modified: 2014-06-24
/*
 * Copyright url.h,v 1.6 2006/11/30 17:11:22 murch Exp $
 */

#ifndef IMAPURL_H
#define IMAPURL_H

struct imapurl {
    char *freeme;   /* copy of original URL + decoded mailbox; caller must free() */

    /* RFC 2192 */
    const char *user;
    const char *auth;
    const char *server;
    const char *mailbox;
    unsigned long uidvalidity;
    unsigned long uid;
    const char *section;

    /* RFC 2192bis */
    unsigned long start_octet;
    unsigned long octet_count;

    /* URLAUTH */
    struct {
        const char *access;
        const char *mech;
        const char *token;
        time_t expire;
        size_t rump_len;
    } urlauth;
};

/* Convert hex coded UTF-8 URL path to modified UTF-7 IMAP mailbox
 * mailbox should be about twice the length of src to deal with non-hex
 * coded URLs; server should be as large as src.
 */
int imapurl_fromURL(struct imapurl *url, const char *src);

/* Convert an IMAP mailbox to a URL path
 * dst needs to have roughly 4 times the storage space of mailbox
 *   Hex encoding can triple the size of the input
 *   UTF-7 can be slightly denser than UTF-8
 *   (worst case: 8 octets UTF-7 becomes 9 octets UTF-8)
 *
 * it is valid for mechname to be NULL (implies anonymous mech)
 */
void imapurl_toURL(char *dst, struct imapurl *url);

#endif /* IMAPURL_H */
CS::RenderManager::ShadowSettings Class Reference
Helper to read shadow handler settings. More...
#include <csplugincommon/rendermanager/shadow_common.h>
Detailed Description
Helper to read shadow handler settings.
Definition at line 40 of file shadow_common.h.
Member Function Documentation
Do per-frame house keeping - MUST be called every frame/ RenderView() execution, typically from the shadow handler's persistent data UpdateNewFrame() method.
Read settings from configuration (such as targets, default shader, etc.). shadowType is used as a part of the settings configuration keys (e.g. RenderManager.Shadows.(type).Shader.Type). See data/config-plugins/shadows.cfg for shadow settings examples.
Member Data Documentation
Post processing effects to apply to shadow map.
Definition at line 77 of file shadow_common.h.
Whether to provide IDs for each shadowed mesh.
Definition at line 72 of file shadow_common.h.
Default shader for rendering to shadow map.
Definition at line 68 of file shadow_common.h.
Shader type for rendering to shadow map.
Definition at line 70 of file shadow_common.h.
Shader variable taking ID for a mesh.
Definition at line 74 of file shadow_common.h.
Targets for shadow maps.
Definition at line 66 of file shadow_common.h.
The documentation for this class was generated from the following file:
- csplugincommon/rendermanager/shadow_common.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
screen_get_window_property_pv()
Retrieve the current value of the specified window property of type void*.
Synopsis:
#include <screen/screen.h>
int screen_get_window_property_pv(screen_window_t win, int pname, void **param);
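A short illustrative sketch, assuming an existing window handle win and using SCREEN_PROPERTY_DISPLAY as one of the pointer-valued properties this call accepts (error handling omitted):

#include <screen/screen.h>

screen_display_t display = NULL;
screen_get_window_property_pv(win, SCREEN_PROPERTY_DISPLAY, (void **)&display);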
Strongly Encouraged Development Guidelines
This section describes guidelines that you should follow when you write your cmdlets. They are separated into guidelines for designing cmdlets and guidelines for writing your cmdlet code. You might find that these guidelines are not applicable for every scenario. However, if they do apply and you do not follow these guidelines, your users might have a poor experience when they use your cmdlets.
Design Guidelines
Code Guidelines
Design Guidelines
The following guidelines should be followed when designing cmdlets to ensure a consistent user experience between using your cmdlets and other cmdlets. When you find a Design guideline that applies to your situation, be sure to look at the Code guidelines for similar guidelines.
Use a Specific Noun for a Cmdlet Name (SD01)
Nouns used in cmdlet naming need to be very specific so that the user can discover your cmdlets. Prefix generic nouns such as "server" with a shortened version of the product name. For example, if a noun refers to a server that is running an instance of Microsoft SQL Server, use a noun such as "SQLServer". The combination of specific nouns and the short list of approved verbs enable the user to quickly discover and anticipate functionality while avoiding duplication among cmdlet names.
To enhance the user experience, the noun that you choose for a cmdlet name should be singular. For example, use the name Get-Process instead of Get-Processes. It is best to follow this rule for all cmdlet names, even when a cmdlet is likely to act upon more than one item.
Use Pascal Case for Cmdlet Names (SD02)
Use Pascal case for cmdlet names. In other words, capitalize the first letter of the verb and of all terms used in the noun. For example, “Clear-ItemProperty”.
Parameter Design Guidelines (SD03)
A cmdlet needs parameters that receive the data on which it must operate, and parameters that indicate information that is used to determine the characteristics of the operation. For example, a cmdlet might have a Name parameter that receives data from the pipeline, and the cmdlet might have a Force parameter to indicate that the cmdlet can be forced to perform its operation. There is no limit to the number of parameters that a cmdlet can define.
Use Standard Parameter Names
Your cmdlet should use standard parameter names so that the user can quickly determine what a particular parameter means. If a more specific name is required, use a standard parameter name, and then specify a more specific name as an alias. For example, the Get-Service cmdlet has a parameter that has a generic name (Name) and a more specific alias (ServiceName). Both terms can be used to specify the parameter.
For more information about parameter names and their data types, see Standard Cmdlet Parameter Names and Types.
Use Singular Parameter Names
Avoid using plural names for parameters whose value is a single element. This includes parameters that take arrays or lists because the user might supply an array or list with only one element.
Plural parameter names should be used only in those cases where the value of the parameter is always a multiple-element value. In these cases, the cmdlet should verify that multiple elements are supplied, and the cmdlet should display a warning to the user if multiple elements are not supplied.
Use Pascal Case for Parameter Names
Use Pascal case for parameter names. In other words, capitalize the first letter of each word in the parameter name, including the first letter of the name. For example, the parameter name ErrorAction uses the correct capitalization. The following parameter names use incorrect capitalization:
- errorAction
- erroraction
Parameters That Take a List of Options
There are two ways to create a parameter whose value can be selected from a set of options.
- Define an enumeration type (or use an existing enumeration type) that specifies the valid values. Then, use the enumeration type to create a parameter of that type.
- Add the ValidateSet attribute to the parameter declaration. For more information about this attribute, see ValidateSet Attribute Declaration.
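A short sketch of both approaches inside a cmdlet class (the property names and values are illustrative):

    public enum LogLevel { Low, Normal, Verbose }

    [Parameter]
    public LogLevel Level { get; set; }            // option 1: an enumeration type constrains the values

    [Parameter]
    [ValidateSet("Low", "Normal", "Verbose")]
    public string LevelName { get; set; }          // option 2: the ValidateSet attribute on a string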
Use Standard Types for Parameters
To ensure consistency with other cmdlets, use standard types for parameters wherever possible. For the types that should be used for different parameters, see Standard Cmdlet Parameter Names and Types. This topic provides links to several topics that describe the names and .NET Framework types for groups of standard parameters, such as the “activity parameters”.
Use Strongly-Typed .NET Framework Types
Parameters should be defined as .NET Framework types to provide better parameter validation. For example, parameters that are restricted to one value from a set of values should be defined as an enumeration type. To support a Uniform Resource Identifier (URI) value, define the parameter as a Uri type. Avoid basic string parameters for all but free-form text properties.
Use Consistent Parameter Types
When the same parameter is used by multiple cmdlets, always use the same parameter type. For example, if the Process parameter is an Int16 type for one cmdlet, do not make the Process parameter for another cmdlet a UInt16 type.
Parameters That Take True and False
If your parameter takes only true and false, define the parameter as type SwitchParameter. A switch parameter is treated as true when it is specified in a command. If the parameter is not included in a command, Windows PowerShell considers the value of the parameter to be false. Do not define Boolean parameters.
If your parameter needs to differentiate between 3 values: $true, $false and “unspecified”, then define a parameter of type Nullable<bool>. The need for a 3rd, "unspecified" value typically occurs when the cmdlet can modify a Boolean property of an object. In this case "unspecified" means to not change the current value of the property.
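A sketch of the two declarations inside a cmdlet class:

    [Parameter]
    public SwitchParameter Force { get; set; }     // true when specified on the command line, otherwise false

    [Parameter]
    public bool? Enabled { get; set; }             // $true, $false, or null meaning "leave the property unchanged"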
Support Arrays for Parameters
Frequently, users must perform the same operation against multiple arguments. For these users, a cmdlet should accept an array as parameter input so that a user can pass the arguments into the parameter as a Windows PowerShell variable. For example, the Get-Process cmdlet uses an array for the strings that identify the names of the processes to retrieve.
Support the PassThru Parameter
By default, many cmdlets that modify the system, such as the Stop-Process cmdlet, act as "sinks" for objects and do not return a result. These cmdlets should implement the PassThru parameter to force the cmdlet to return an object. When the PassThru parameter is specified, the cmdlet returns an object by using a call to the WriteObject method. For example, the following command stops the Calc process and passes the resultant process to the pipeline.
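The command referred to above is presumably along the lines of:

    Stop-Process -Name calc -PassThru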
In most cases, Add, Set, and New cmdlets should support a PassThru parameter.
Support Parameter Sets
A cmdlet is intended to accomplish a single purpose. However, there is frequently more than one way to describe the operation or the operation target. For example, a process might be identified by its name, by its identifier, or by a process object. The cmdlet should support all the reasonable representations of its targets. Normally, the cmdlet satisfies this requirement by specifying sets of parameters (referred to as parameter sets) that operate together. A single parameter can belong to any number of parameter sets. For more information about parameter sets, see Cmdlet Parameter Sets.
When you specify parameter sets, set only one parameter in the set to ValueFromPipeline. For more information about how to declare the Parameter attribute, see Parameter Attribute Declaration.
When parameter sets are used, the default parameter set is defined by the Cmdlet attribute. The default parameter set should include the parameters most likely to be used in an interactive Windows PowerShell session. For more information about how to declare the Cmdlet attribute, see Cmdlet Attribute Declaration.
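A sketch of a cmdlet declaring two parameter sets with a default set (the cmdlet and parameter names are illustrative):

    using System.Management.Automation;

    [Cmdlet(VerbsLifecycle.Stop, "Proc", DefaultParameterSetName = "ByName")]
    public class StopProcCommand : PSCmdlet
    {
        [Parameter(ParameterSetName = "ByName", Mandatory = true, ValueFromPipeline = true)]
        public string[] Name { get; set; }

        [Parameter(ParameterSetName = "ById", Mandatory = true)]
        public int[] Id { get; set; }

        protected override void ProcessRecord()
        {
            // resolve the target processes from whichever parameter set was used ...
        }
    }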
Provide Feedback to the User (SD04)
Use the guidelines in this section to provide feedback to the user. This feedback allows the user to be aware of what is occurring in the system and to make better administrative decisions.
The Windows PowerShell runtime allows a user to specify how to handle output from each call to the Write method by setting a preference variable. The user can set several preference variables, including a variable that determines whether the system should display information and a variable that determines whether the system should query the user before taking further action.
Support the WriteWarning, WriteVerbose, and WriteDebug Methods
A cmdlet should call the WriteWarning method when the cmdlet is about to perform an operation that might have an unintended result. For example, a cmdlet should call this method if the cmdlet is about to overwrite a read-only file.
A cmdlet should call the WriteVerbose method when the user requires some detail about what the cmdlet is doing. For example, a cmdlet should call this information if the cmdlet author feels that there are scenarios that might require more information about what the cmdlet is doing.
The cmdlet should call the WriteDebug method when a developer or product support engineer must understand what has corrupted the cmdlet operation. It is not necessary for the cmdlet to call the WriteDebug method in the same code that calls the WriteVerbose method because the Debug parameter presents both sets of information.
Support WriteProgress for Operations that take a Long Time
Cmdlet operations that take a long time to complete and that cannot run in the background should support progress reporting through periodic calls to the WriteProgress method.
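A sketch of periodic progress reporting from within a record-processing loop (the activity text and the total variable are illustrative):

    var progress = new ProgressRecord(1, "Copying items", "Starting");
    for (int i = 0; i < total; i++)
    {
        progress.PercentComplete = (i * 100) / total;
        progress.StatusDescription = string.Format("Item {0} of {1}", i + 1, total);
        WriteProgress(progress);
        // ... do the work for item i ...
    }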
Use the Host Interfaces
Occasionally, a cmdlet must communicate directly with the user instead of by using the various Write or Should methods supported by the Cmdlet class. In this case, the cmdlet should derive from the PSCmdlet class and use the Host property. This property supports different levels of communication type, including the PromptForChoice, Prompt, and WriteLine/ReadLine types. At the most specific level, it also provides ways to read and write individual keys and to deal with buffers.
Unless a cmdlet is specifically designed to generate a graphical user interface (GUI), it should not bypass the host by using the Host property. An example of a cmdlet that is designed to generate a GUI is the Out-GridView cmdlet.
Create a Cmdlet Help File (SD05)
For each cmdlet assembly, create a Help.xml file that contains information about the cmdlet. This information includes a description of the cmdlet, descriptions of the cmdlet's parameters, examples of the cmdlet's use, and more.
Code Guidelines
The following guidelines should be followed when coding cmdlets to ensure a consistent user experience between using your cmdlets and other cmdlets. When you find a Code guideline that applies to your situation, be sure to look at the Design guidelines for similar guidelines.
Coding Parameters (SC01)
Define a parameter by declaring a public property of the cmdlet class that is decorated with the Parameter attribute. Parameters do not have to be static members of the derived .NET Framework class for the cmdlet. For more information about how to declare the Parameter attribute, see Parameter Attribute Declaration.
Support Windows PowerShell Paths
The Windows PowerShell path is the mechanism for normalizing access to namespaces. When you assign a Windows PowerShell path to a parameter in the cmdlet, the user can define a custom "drive" that acts as a shortcut to a specific path. When a user designates such a drive, stored data, such as data in the Registry, can be used in a consistent way.
If your cmdlet allows the user to specify a file or a data source, it should define a parameter of type String. If more than one drive is supported, the type should be an array. The name of the parameter should be Path, with an alias of PSPath. Additionally, the Path parameter should support wildcard characters. If support for wildcard characters is not required, define a LiteralPath parameter.
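A sketch of such parameter declarations inside a cmdlet class (positions and pipeline bindings are illustrative):

    [Parameter(Position = 0, ValueFromPipelineByPropertyName = true)]
    [Alias("PSPath")]
    public string[] Path { get; set; }             // supports wildcard characters

    [Parameter(ValueFromPipelineByPropertyName = true)]
    public string[] LiteralPath { get; set; }      // taken literally, no wildcard expansion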
If the data that the cmdlet reads or writes has to be a file, the cmdlet should accept Windows PowerShell path input, and the cmdlet should use the System.Management.Automation.SessionState.Path property to translate the Windows PowerShell paths into paths that the file system recognizes. The specific mechanisms include the following methods:
- System.Management.Automation.PSCmdlet.GetResolvedProviderPathFromPSPath(System.String,System.Management.Automation.ProviderInfo)
- System.Management.Automation.PSCmdlet.GetUnresolvedProviderPathFromPSPath(System.String)
- System.Management.Automation.PathIntrinsics.GetResolvedProviderPathFromPSPath(System.String,System.Management.Automation.ProviderInfo)
- System.Management.Automation.PathIntrinsics.GetUnresolvedProviderPathFromPSPath(System.String)
If the data that the cmdlet reads or writes is only a set of strings instead of a file, the cmdlet should use the provider content information (Content member) to read and write. This information is obtained from the InvokeProvider property. These mechanisms allow other data stores to participate in the reading and writing of data.
Support Wildcard Characters
A cmdlet should support wildcard characters if possible. Support for wildcard characters occurs in many places in a cmdlet (especially when a parameter takes a string to identify one object from a set of objects). For example, the sample Stop-Proc cmdlet from the StopProc Tutorial defines a Name parameter to handle strings that represent process names. This parameter supports wildcard characters so that the user can easily specify the processes to stop.
When support for wildcard characters is available, a cmdlet operation usually produces an array. Occasionally, it does not make sense to support an array because the user might use only a single item at a time. For example, the Set-Location cmdlet does not need to support an array because the user is setting only a single location. In this instance, the cmdlet still supports wildcard characters, but it forces resolution to a single location.
For more information about wildcard-character patterns, see Supporting Wildcard Characters in Cmdlet Parameters.
Defining Objects
This section contains guidelines for defining objects for cmdlets and for extending existing objects.
Define Standard Members
Define standard members to extend an object type in a custom Types.ps1xml file (use the Windows PowerShell Types.ps1xml file as a template). Standard members are defined by a node with the name PSStandardMembers. These definitions allow other cmdlets and the Windows PowerShell runtime to work with your object in a consistent way.
Define ObjectMembers to Be Used as Parameters
If you are designing an object for a cmdlet, ensure that its members map directly to the parameters of the cmdlets that will use it. This mapping allows the object to be easily sent to the pipeline and to be passed from one cmdlet to another.
Preexisting .NET Framework objects that are returned by cmdlets are frequently missing some important or convenient members that are needed by the script developer or user. These missing members can be particularly important for display and for creating the correct member names so that the object can be correctly passed to the pipeline. Create a custom Types.ps1xml file to document these required members. When you create this file, we recommend the following naming convention: <Your_Product_Name>.Types.ps1xml.
For example, you could add a Mode script property to the FileInfo type to display the attributes of a file more clearly. Additionally, you could add a Count alias property to the Array type to allow the consistent use of that property name (instead of Length).
Implement the IComparable Interface
Implement a IComparable interface on all output objects. This allows the output objects to be easily piped to various sorting and analysis cmdlets.
Update Display Information
If the display for an object does not provide the expected results, create a custom <YourProductName>.Format.ps1xml file for that object.
Support Well Defined Pipeline Input (SC02)
Implement for the Middle of a Pipeline
Implement a cmdlet assuming that it will be called from the middle of a pipeline (that is, other cmdlets will produce its input or consume its output). For example, you might assume that the Get-Process cmdlet, because it generates data, is used only as the first cmdlet in a pipeline. However, because this cmdlet is designed for the middle of a pipeline, this cmdlet allows previous cmdlets or data in the pipeline to specify the processes to retrieve.
Support Input from the Pipeline
In each parameter set for a cmdlet, include at least one parameter that supports input from the pipeline. Support for pipeline input allows the user to retrieve data or objects, to send them to the correct parameter set, and to pass the results directly to a cmdlet.
A parameter accepts input from the pipeline if the Parameter attribute includes the ValueFromPipeline keyword, the ValueFromPipelineByPropertyName keyword attribute, or both keywords in its declaration. If none of the parameters in a parameter set support the ValueFromPipeline or ValueFromPipelineByPropertyName keywords, the cmdlet cannot meaningfully be placed after another cmdlet because it will ignore any pipeline input.
Support the ProcessRecord Method
To accept all the records from the preceding cmdlet in the pipeline, your cmdlet must implement the ProcessRecord method. Windows PowerShell calls this method multiple times, once for every record that is sent to your cmdlet.
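A sketch of a pipeline-aware parameter and ProcessRecord override inside a cmdlet class (the types and names are illustrative):

    [Parameter(ValueFromPipeline = true)]
    public System.Diagnostics.Process[] InputObject { get; set; }

    protected override void ProcessRecord()
    {
        // called once per record arriving from the pipeline
        foreach (System.Diagnostics.Process p in InputObject)
        {
            WriteObject(p.ProcessName);   // emit each result as it is produced
        }
    }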
Write Single Records to the Pipeline (SC03)
When a cmdlet returns objects, the cmdlet should write the objects immediately as they are generated. The cmdlet should not hold them in order to buffer them into a combined array. The cmdlets that receive the objects as input will then be able to process, display, or process and display the output objects without delay. A cmdlet that generates output objects one at a time should call the System.Management.Automation.Cmdlet.WriteObject(System.Object) method. A cmdlet that generates output objects in batches (for example, because an underlying API returns an array of output objects) should call the System.Management.Automation.Cmdlet.WriteObject(System.Object,System.Boolean) method with its second parameter set to true.
Make Cmdlets Case-Insensitive and Case-Preserving (SC04)
By default, Windows PowerShell itself is case-insensitive. However, because it deals with many preexisting systems, Windows PowerShell does preserve case for ease of operation and compatibility. In other words, if a character is supplied in uppercase letters, Windows PowerShell keeps it in uppercase letters. For systems to work well, a cmdlet needs to follow this convention. If possible, it should operate in a case-insensitive way. It should, however, preserve the original case for cmdlets that occur later in a command or in the pipeline.
See Also
Concepts
Required Development Guidelines
Advisory Development Guidelines
Other Resources
Writing a Windows PowerShell Cmdlet
On Fri, 2008-08-01 at 08:44 -0700, Randy Dunlap wrote:
> On Thu, 31 Jul 2008 23:58:39 -0700 Eric Anholt wrote:
> > .
> > "the 2d driver" ... "the 3d driver".
> Just curious: Is there only one of each of these?

"The 2D driver" means xf86-video-intel xorg driver in userland.
"The 3D driver" means i9[16]5_dri.so mesa driver in userland.

> > diff --git a/drivers/gpu/drm/drm_agpsupport.c b/drivers/gpu/drm/drm_agpsupport.c
> > index aefa5ac..2639be2 100644
> > --- a/drivers/gpu/drm/drm_agpsupport.c
> > +++ b/drivers/gpu/drm/drm_agpsupport.c
> > @@ -33,6 +33,7 @@
> >
> >  #include "drmP.h"
> >  #include <linux/module.h>
> > +#include <asm/agp.h>
> >
> >  #if __OS_HAS_AGP
> >
> > @@ -452,4 +453,52 @@ int drm_agp_unbind_memory(DRM_AGP_MEM * handle)
> >  	return agp_unbind_memory(handle);
> >  }
> >
> > -#endif /* __OS_HAS_AGP */
> > +/**
>
> In the kernel source tree, "/**" means "beginning of kernel-doc notation",
> so please don't use it when kernel-doc isn't being used.
> (in multiple places/files)

The codebase this is coming from uses doxygen. I hadn't checked to see
if all the doxygen was converted to kernel-doc.

--
Eric Anholt
eric@anholt.net  eric.anholt@intel.com

[unhandled content-type:application/pgp-signature]
"Activator" to "Plugin"
Changed the last sentence to "The information in this book is based
on the 1.2 FCS release.".
Changed "(but not often used)" to "(and are deprecated)".
:"
Changed "-usepolicy" to "-Djava.security.manager"
sidebar, last paragraph now reads:
Beginning in 1.2 beta 3, the Launcher class was incorporated into
the virtual machine itself, but the syntax to use it changed in
the last few beta releases. In FCS, the correct syntax is:
piccolo% java -Djava.security.manager Cat /etc/passwd
2nd paragraph now reads:
... and readObject() methods. The writeObject() method is
responsible for writing out all data in the class; it typically uses
the defaultWriteObject() method to write out all non-transient data
and then its writes the transient data out in any format it desires.
Similarly, the readObject() method uses the defaultReadObject()
method to read the data and then must restore the corresponding
transient data. It's your decision how to save and reconstitute the
transient data so that its ...
3rd paragraph now reads:
The last clause of the last sentence should be
--no data can be stored or reconstituted by any default methods.
Changed "zip" to "JAR" (added to the JAR file containing...)
Changed "otherslocated" to "others located"
Change "classes.zip" to "rt.jar"
concluded added:
[Paragraph]
An instance of the URLClassLoader class may also be obtained via one of
these methods:
[ListVariableTerm]
public static URLClassLoader newInstance(URL[] urls) [filled-star dingbat]
public static URLClassLoader newInstance(URL[] urls, ClassLoader parent) [filled-star dingbat]
[ListVariable] that we referred to earlier. In
1.2, only class loaders obtained in this way will perform that
optional step (unless, of course, you write your own class loader
that performs that step).
Changed the first two sentences to:
In 1.2, the URLCLassLoader class fails to handle multiple HTTP-based
URLs correctly. It is hoped that this will be fixed someday; if it
is not and ...
URLClassLoader class, the invokeClass()...").
The code segment incorrectly defined the jrl instance variable twice.
The correct code now reads:
Class self = Class.forName("JavaRunner");
JavaRunnerLoader jrl = new
JavaRunnerLoader(args[0], self.getClassLoader());
ThreadGroup tg = new ThreadGroup("JavaRunner Threadgroup");
variable twice. Corrected the definition of jrl to read:
Class self = Class.forName("JavaRunner");
JavaRunnerLoader jrl =
new JavaRunnerLoader(args[0], self.getClassLoader());
new text to the end of the corresponding ListVariable paragraph, so that
the entire thing looks like this:
protected final Class defineClass(String name, byte data[], int offset, int length)
protected final Class defineClass(String name, byte data[],
int offset, int length, ProtectionDomain pd) [filled-star dingbat]
Create a Class object from an array ... the bytes actually define
the desired class. We'll look more at protection domains in
Chapter 5; if you use the signature without a protection domain,
a default (system) domain will be provided for the class.
Changed "findLocalClass()" to "findClass()"
In the last paragraph with a paragraph tag, changed "these two
new methods" to "this new method"
Changed both ListVariableTerm paragraphs by replacing them with this new
paragraph (still a ListVariableTerm format) and changed the corresponding
ListVariable paragraph to:
protected final Class defineClass(String name, byte[] buf, int offset,
int length, CodeSource cs) [filled-star dingbat]
Define a class that is associated with the given code source. If
the code source is null, this method is the equivalent of the
defineClass() method in the base ClassLoader class. We'll defer
showing an example of this method to Chapter 5, when we discuss
code source objects.
protected CodeSource getCodeSource(URL url, Object signers[])
as well as the ListVariable paragraph immediately following it. Hence, the
top of page 48 now has the continuation of the paragraph that was
changed on page 47 and then the next regular paragraph:
showing an example of this method to Chapter 5, when we discuss
code source objects.
As our first example of a class loader, ...
Changed the line that reads
protected Class findLocalClass(String name) {
to
protected Class findClass(String name) {
changed the lines that read
CodeSource cs = getCodeSource(urlBase, null);
cl = defineClass(name, buf, 0, buf.length, cs, null);
to
cl = defineClass(name, buf, 0, buf.length, null);
Deleted all the following lines:
public void checkPackageAccess(String name) {
SecurityManager sm = System.getSecurityManager();
if (sm != null)
sm.checkPackageAccess(name);
}
Changed the middle of that paragraph so that it reads:
... even in 1.2. In 1.2 if you want to make the check for package
access, you can do that by calling the checkPackageAccess() method
of the security manager in the same way that we called the
checkPackageDefinition() method, but that will only prevent you
from accessing classes that aren't found by the system class loader.
Alternately in 1.2, you can use the newInstance() method of the
URLClassLoader class which makes such a check, or you can override
the loadClass() method itself to provide such a check as we showed
earlier. In 1.1, of course, you ...
Changed "findLocalClass()" to "findClass()"
Changed the last sentence to begin
Note that classes loaders that are created by calling the
constructor of the URLClassLoader class do not make such a call;
Changed the line that reads
protected Class findLocalClass(String name) {
to
protected Class findClass(String name) {
Changed the line that reads
return defineClass(name, buf, 0, buf.length, cs, null);
to
return defineClass(name, buf, 0, buf.length, null);
Changed the lines that read
url[0] = new URL("");
url[1] = new URL("");
to
urls[0] = new URL("");
urls[1] = new URL("");
... approach we've outlined above. In addition, the present
implementation of the URLClassLoader will not work with multiple
HTTP-based URLs, so for the present, you must write your own class
loader to handle that case.
Deleted the line (line 5) reading
CodeSource cs;
Deleted the line (line 14) reading
cs = getCodeSource(urlBase, null);
Changed the line reading
protected Class findLocalClass(String name) {
to
protected Class findClass(String name) {
Change the line reading
cl = defineClass(name, buf, 0, buf.length, cs, null);
to
cl = defineClass(name, buf, 0, buf.length, null);
public final ClassLoader getParent() [filled-star dingbat]
Immediately preceeding the HeadB header (Loading resources), added the
following
[Paragraph]
This URL class loader is known as the system class loader, and it
may be retrieved via the following method:
[ListVariableTerm]
public static ClassLoader getSystemClassLoader() [filled-star dingbat]
[ListVariable]
Return the system class loader -- that is, the class loader that
was used to load the base application classes. If a security
manager is in place, you must have the getClassLoader runtime
permission to use this method (see Chapter 5).
[Paragraph]
Hence, to set up a delegation class loader, you can use this call:
[CodeIndent]
jrl = new JavaRunnerLoader(ClassLoader.getSystemClassLoader())
[Paragraph]
instead of the methods we showed earlier.
Changed all occurences of "getLocalResource" to "findResource" [in the
ListVariableTerm and the middle of the final paragraph].
Changed "getLocalResources" to "findResources".
[HeadB]
[Paragraph]
In 1.2, a new method exists in the ClassLoader class:
[ListVariableTerm]
protected String findLibrary(String libname) [filled-star dingbat]
[ListVariable]
Return the directory from which native libraries should be loaded.
[Paragraph]
This method is used by the System.loadLibrary() method to determine the
directory in which the native library in question should be found. If
this method returns null (the default), then the native library must be
in one of the directories specified by either the java.library.path or
java.sys.library.path properties; typically, these properties are
set in a platform-specific way (e.g. from the LD_LIBRARY_PATH on
Solaris or the PATH on Windows). That mimics the behavior that
applies in 1.1 and earlier releases.
[Paragraph]
However, a 1.2 custom class loader can override that policy and require
that libraries be found in some application-defined location. This
prevents a user from overriding the runtime environment to specify an
alternate location for that library, which offers a slight security
advantage. Note that if the user can write to the hardwired directory
where the library lives, this advantage no longer exists: the user
can simply overwrite the existing library instead of changing an
environment variable to point to another library; the end result is
the same.
checkPermission(Permission p) Thread.stop() Stopping a thread could
corrupt state of the
virtual machine.
In 1.2, the Thread class also calls the checkPermission() method of
the security manager whenever the stop() method is called, since
stopping a thread is an inherently dangerous operation (which has
led the stop() method to become deprecated). For backward
compatibility, this permission is normally granted even to untrusted
classes, but an end-user may change her environment so that the
security manager throws an exception whenever the stop() method is
called.
public CodeSource(URL url, Certificate cers[]) [filled-star dingbat]
public final PublicKey[] getKeys()
Return a copy ... (or maliciously).
and replaced it with these two entries
public final Certificate[] getCertificates() [filled-star dingbat]
Return the array of certificates that were used to construct
this code source object. The original certificates are not
returned so that they cannot be modified accidentally (or
maliciously).
public boolean implies(CodeSource cs) [filled-star dingbat]
Determine if the code source implies the parameter according
to the rules of the Permission class (see later in this
chapter). One code source implies another if it contains all
the certificates of the parameter and if the URL of the
parameter is implied by the URL of the target.
Removed the 4th paragraph (the one immediately preceeding the
Permissions header) and replaced it with this paragraph:
In Chapter 3, we didn't bother to create code sources, which meant
that our classes were assigned to a default code source. For the
time being, we'll create code sources in an URL-based class
loader simply based on the URL that we used to construct the class
loader; these classes will all be unsigned classes as a result.
In Chapter 12, we'll show how you can get the correct
certificates with which to construct a code source object for a
signed class.
changed
RuntimePermission r1 = new RuntimePermission("exit");
RuntimePermission r2 = new RuntimePermission("package.access.*");
to
RuntimePermission r1 = new RuntimePermission("exitVM");
RuntimePermission r2 = new RuntimePermission("accessClassInPackage.java");
Changed the last clause of the first sentence to read:
"and r2 represents permission to access classes in the java package."
"topLevelWindow" to "showWindowWithoutWarningBanner"; changed
"systemClipboard" to "accessClipboard", and change "eventQueue"
to "accessEventQueue".
In the first code line, changed "topLevelWindow" to
"showWindowWithoutWarningBanner".
In the first sentence of the paragraph, changed "three different classes"
to "two different classes". In that paragraph, changed
"Authenticator.setDefault" to "setDefaultAuthenticator" and changed
"Authenticator.requestPasswordAuthentication" to
"requestPasswordAuthentication".
Deleted the second paragraph (the one beginning "In addition, the ability
to create multicast...") altogether.
Rewrote the third paragraph (the one that begins "Finally, the ability to
create a listener...") as follows:
In addition, this class encapsulates various URL-related permissions.
Permission to specify a stream handler in the URL class is named
specifyStreamHandler.
Deleted the first code line:
NetPermission n1 = new NetPermission("multicast");
Changed the second code line to
NetPermission n1 = new NetPermission("requestPasswordAuthentication");
Changed the third sentence from
has one valid name: enableSubstitution. If granted, this permission
to
has two valid names: enableSubstitution and
enableSubclassImplementation. The first of these permissions
Then added the following sentence at the end of the paragraph:
The latter permission allows the ObjectInputStream and
ObjectOutputStream classes to be subclassed, which would
potentially override the readObject() and writeObject() methods.
Changed
single name (access)
to
single name (suppressAccessChecks)
public abstract String getName()
It now reads:
public final String getName()
Changed line 13 from
if (addedAdmin && (adminMask & xyz.mask) != 0)
to
if (addedAdmin && (adminMask & xyz.mask) != xyz.mask)
[Paragraph]
If you implement your own PermissionCollection class, you must also
keep track of whether it has been marked as read-only. There are two
methods involved in this:
[ListVariableTerm]
public boolean isReadOnly() [filled-star dingbat]
[ListTerm]
Return an indication of whether the collection has been marked as
read-only.
[ListVariableTerm]
public void setReadOnly() [filled-star dingbat]
[ListTerm]
Set the collection to be read-only. Once the read-only flag has
been set, it cannot be unset: the collection will remain read-only
forever.
[Paragraph]
A permission collection is expected to throw a security exception from
its add() method if it has been marked as read-only. Note that the
read-only instance variable is private to the PermissionCollection
class, so subclasses will have to rely on the isReadOnly() method to
test its value.
This line of code used to read:
if (adminAdded && (adminMask & xyz.mask) != xyz.mask)
It now reads:
if (adminAdded && (adminMask & xyz.mask) == xyz.mask)
are inherited...") to the "The Policy Class" header.
change "Policy.getPolicy" to "getPolicy" and change "Policy.setPolicy"
to "setPolicy".
"getPermissions"
public abstract Permissions getPermissions(CodeSource cs) [dingbat]
"sun.security.provider.PolicyFile".
package".
"-Djava.security.policy, which must be used in conjunction with the
-Djava.security.manager option."
Changed the first command line to:
piccolo% java -Djava.security.manager
-Djava.security.policy=/globalfiles/java.policy Cat /etc/passwd
Changed the next two "-usepolicy" to "-Djava.security.manager" [both in
the ListText... paragraph between the first and second command line
examples.]
Changed the second command line to:
piccolo% java -Djava.security.manager
-Djava.security.policy==/globalfiles/java.policy Cat /etc/passwd
Changed the paragraph beginning "Note that you may also specify..." to
"Note that you may also simply specify the -Djava.security.manager
flag with no additional arguments, in which case..."
Changed the third command line to:
piccolo% java -Djava.security.manager Cat /etc/passwd
"-Djava.security.policy"
brace, added the line:
permission java.lang.RuntimePermission "stopThread";
At the end of the 1st paragraph, added the sentence:
All other classes will also be able to call the stop() method on a
thread.
public ProtectionDomain(CodeSource cs, Permissions p) [dingbat]
to
public ProtectionDomain(CodeSource cs, PermissionCollection p) [dingbat]
First bullet:
o The defineClass() method accepts a protection domain as a
parameter. In this case, the given protection domain is assigned
to the class. This case is typically unused, since that method
exists only in the ClassLoader class and not in the
SecureClassLoader class.
Second bullet:
o. However, a secure class loader
(including, of course, a URL class loader)
has the option of overriding the getPermissions() method to
enhance the set of permissions that a particular class might
have. We'll see an example of this in Chapter 6, when we discuss
network permissions in the class loader.
Third bullet: changed "evaluate()" to "getPermissions()".
public Permissions getPermissions() [dingbat]
to
public PermissionCollection getPermissions() [dingbat]
In the associated ListVariable paragraph, change "permissions" to
"permission collection".
(117:) At the beginning of the fourth line, changed "Test" to
"AccessTest". In figure 5-2, change "Test.init()" to
"AccessTest.init()".
facility is possible with these two methods..." through the first two
paragraphs of the page 120. Replaced that section with this text:
[Paragraph]
That facility is possible with these two methods of the access
controller class:
[ListVariableTerm]
public static Object doPrivileged(PrivilegedAction pa) [filled-star
dingbat]
[ListVariableTerm]
public static Object doPrivileged(PrivilegedExceptionAction pae)
[filled-star dingbat]
[ListVariable]
Execute the run() method of the given object, temporarily
granting its permission to any protection domains that are below
it on the stack. In the second case, if the embedded run() method
throws an exception, the doPrivileged() method will throw a
PrivilegedActionException.
[Paragraph]
The PrivilegedAction and PrivilegedExceptionAction interfaces
contain a single method:
[ListVariableTerm]
public Object run() [filled-star dingbat]
[ListVariable]
Run the target code, which will have the permissions of the calling
class.
[Paragraph]
The difference between the two interfaces is that the run() method of
PrivilegedExceptionAction is permitted to throw a checked exception.
[Paragraph]
The PrivilegedActionException class is a standard exception, so
you must always be prepared to catch it when using the
doPrivileged() method. If the embedded run() method does throw an
exception, that exception will be wrapped into the
PrivilegedActionException, where it may be retrieved with this
call:
[ListVariableTerm]
public Exception getException()
[ListVariable]
Return the exception that was originally thrown to cause the
PrivilegedActionException to be thrown.
[Paragraph]
Let's see how all of this might work with our network monitor
example:
[CodeIndent]
public class NetworkMonitor {
public NetworkMonitor() {
try {
class doSocket implements PrivilegedExceptionAction {
public Object run() throws UnknownHostException,
IOException {
return new Socket("net.xyz.com", 4000);
}
};
doSocket ds = new doSocket();
Socket s = (Socket) AccessController.doPrivileged(ds);
} catch (PrivilegedActionException pae) {
Exception e = pae.getException();
if (e instanceof UnknownHostException) {
// process host exception
}
else if (e instanceof IOException) {
// process IOException
}
else {
// e must be a runtime exception
throw (RuntimeException) e;
}
}
}
}
[Paragraph]
Two points are noteworthy here. First, the code that needs to be
executed with the privileges of the NetworkMonitor class has been
encapsulated into a new class -- the inner doSocket() class.
[Paragraph]
Second, the exception handling is somewhat new: we must list the
exceptions that the socket constructor can throw in the run()
method of our embedded class. If either of those exceptions is
thrown, it will be encapsulated into a PrivilegedActionException
and thrown back to the network monitor, where we can retrieve the
actual exception with the getException() method.
[Continue with the paragraph reading "Let's examine the effect
these calls have" but change "these calls" to "this call". In that
paragraph, also change "beginPrivileged()" to "doPrivileged()".]
Changed the code example to the following:
public class PayrollApp {
NetworkMonitor nm;
public void init() {
class doInit implements PrivilegedAction {
public Object run() {
nm = new NetworkMonitor();
return null;
}
}
doInit di = new doInit();
AccessController.doPrivileged(di);
}
}
[both occurances in the paragraph.]
should be in Courier font.
2nd paragraph now reads:
In 1.2, this variable and method are deprecated. The correct
operation to perform in a 1.2-based security manager is to place
the calls to the InetAddress class in a class that can be used by
the doPrivileged() method. In addition, the InetAddress class in
1.2 no longer calls the getInCheck() method.
support machine are trusted and other classes are not. Note that
if you are going to use this technique in 1.2, it is quite
possible that the class loader will not be your multi loader -- it
might be one of the internal class loaders that is used to load
extension or API classes. In that case, instead of throwing a
security exception when the class cast fails, you should simply
call the super.checkWrite() method, which will do the correct
thing in 1.2.
At the beginning of the example, (as line 1), added the following
private ClassLoader getNonSystemClassLoader() {
Class c[] = getClassContext();
ClassLoader sys = ClassLoader.getSystemClassLoader();
for (int i = 1; i < c.length; i++) {
ClassLoader cl = c[i].getClassLoader();
if (cl != null && !cl.equals(sys))
return cl;
}
return null;
}
Changed line 9 from
ClassLoader loader = currentClassLoader();
to
// In 1.1, use currentClassLoader() instead
ClassLoader loader = getNonSystemClassLoader();
Changed lines 22 to 32 from
try {
AccessController.beginPrivileged();
...
} finally {
AccessController.endPrivileged();
}
to
try {
class testHost implements PrivilegedExceptionAction {
String local, remote;
testHost(String local, String remote) {
this.local = local;
this.remote = remote;
}
public Object run() throws UnknownHostException {
InetAddress hostAddr = InetAddress.getByName(local);
InetAddress remoteAddr = InetAddress.getByName(remote);
if (hostAddr.equals(remoteAddr))
return new Boolean("true");
return new Boolean("false");
}
}
testHost th = new testHost(host, remoteHost);
Boolean b = (Boolean) AccessController.doPrivileged(th);
if (b.booleanValue())
return;
} catch (PrivilegedActionException pae) {
// Must be an UnknownHostException; continue and throw exception
}
2nd paragraph, the 2nd sentence now reads:
For a 1.1-based security manager, you would set the inCheck
variable to true, execute the calls that are in the run() method of
the testHost class, and then set inCheck to false. You would also
need to make this method and the getInCheck() methods
synchronized.
1st paragraph: changed the last sentence to read:
In Java 1.2, you can use the doPrivileged() method of the access
controller from within the class loader to attempt to open the
URL.
subsequent entire code example, and the first paragraph of 145 to be
[Paragraph]
This implementation requires us to override the getPermissions()
method of the SecureClassLoader class as follows:
[CodeIndent]
protected PermissionCollection getPermissions(CodeSource cs) {
if (!cs.equals(this.cs))
return null;
Policy pol = Policy.getPolicy();
PermissionCollection pc = pol.getPermissions(cs);
pc.add(new SocketPermission(urlBase.getHost(), "connect"));
return pc;
}
[Paragraph]
As long as we use the correct code source to define the class,
then when the class loader resolves its permissions, the
appropriate socket permission will be added to its user-defined
set of permissions.
Changed the sentence
In 1.2, the default behavior of the security manager is to implement
the model we'll describe in this section.
To
In 1.2, the default behavior of the security manager is to allow the
checkAccess() method to succeed in all cases unless the target
thread is a member of the system thread group or the target thread
group is the system thread group. In those cases,
the program must have been granted a runtime
permission of modifyThread or modifyThreadGroup (depending on which
checkAccess() method is involved) for the operation to succeed.
Hence, any thread can modify any thread or thread group except for
those that belong to the system thread group.
In the 3rd paragraph, deleted the phrase "this is a model implemented by
the SecurityManager class in 1.2.". So the paragraph reads
We'll show an example that implements a hierarchical notion of
thread permissions which fits well within the notion of the virtual
machine's thread hierarchy (see Figure 6-1). In this model, a ...
In the last paragraph, changed the last sentence to read
The 1.2 default security manager checks for the modifyThread and
modifyThreadGroup permissions as we described above.
Immediately preceeding the "Implementing Package Access" header, added
this paragraph:
Finally, remember that in 1.2, the stop() method of the Thread class
first calls the checkPermission() method of the security manager to
see if the current stack has a runtime permission of "stopThread".
For backward compatibility, all protection domains have that
permission by default, but a particular user may change that
in the policy file.
page 149) to read
A final area for which the default security manager is sometimes
inadequate is the manner in which it checks for package access and
definition. In 1.1, the default security manager rejects all package
access and definition attempts.
In 1.2, the situation is somewhat complex. For package access, the
security manager looks for a property defined in the java.security
file named package.access. This property is a list of comma
separated package names for which access should be checked. If the
class loader uses the checkPackageAccess() method (and remember that
many do not) and attempts to access a package that is in the list
specified in the java.security file, then the program must have a
runtime permission with a name of
accessClassInPackage.<package name>. For defining a class, the
operation is similar; the property name in the java.security file is
package.definition and the appropriate runtime permission has a name
of defineClassInPackage.<package name>. This model works well, but
it requires that the java.security file and all the java.policy
files be co-ordinated in their attempts to protect package access
and definition.
For that reason, and also to provide a better migration between
releases (and because it's the only way to do it in 1.1), you may
want to include the logic to process some policies within your new
security manager. In that way, users will not need to make any
changes on their system; in this case, the user will not have to put
the appropriate RuntimePermission entries into the java.policy files
by hand.
Changed the entire example to read
public void checkPackageAccess(String pkg) {
// In 1.1, don't call the super class, which will automatically
// reject the operation
super.checkPackageAccess(pkg);
}
Changed
checkAccess() [both signatures] RuntimePermission("thread")
to these two entries
checkAccess(Thread t) RuntimePermission("modifyThread");
checkAccess(ThreadGroup tg) RuntimePermission("modifyThreadGroup");
Changed
checkExit(int status) RuntimePermission("exit")
to
checkExit(int status) RuntimePermission("exitVM")
Changed
checkRead(FileDescriptor fd) RuntimePermission("fileDescriptor.read")
to
checkRead(FileDescriptor fd) RuntimePermission("readFileDescriptor")
Changed
checkWrite(FileDescriptor fd) RuntimePermission("fileDescriptor.write")
to
checkWrite(FileDescriptor fd) RuntimePermission("writeFileDescriptor")
Changed
checkMulticast() [both signatures] NetPermission("multicast")
to
checkMulticast() [both signatures] SocketPermission(maddr.getHostAddress(), "accept,connect")
Changed
checkTopLevelWindow(Object w) AWTPermission("topLevelWindow")
to
checkTopLevelWindow(Object w) AWTPermission("showWindowWithoutWar
ningBanner")
Changed
checkPrintJobAccess() RuntimePermission("print.queueJob")
to
checkPrintJobAccess() RuntimePermission("queuePrintJob")
Changed
checkSystemClipboardAccess() AWTPermission("systemClipboard");
to
checkSystemClipboardAccess() AWTPermission("accessClipboard");
Changed
checkAwtEventQueueAccess() AWTPermission("eventQueue");
to
checkAwtEventQueueAccess() AWTPermission("accessEventQueue");
Changed
checkPackageAccess(String pkg) RuntimePermission("package.access." + pkg)
checkPackageDefinition(String pkg) RuntimePermission("package.define." + pkg)
to
checkPackageAccess(String pkg) RuntimePermission("accessClassInPackage." + pkg)
checkPackageDefinition(String pkg) RuntimePermission("defineClassInPackage." + pkg)
Changed
checkMemberAccess(Class c, int which) RuntimePermission("reflect.declared." + c.getName())
to
checkMemberAccess(Class c, int which) RuntimePermission("accessDeclaredMembers." + c.getName())
o The checkAccess() methods only check for the given permission if
the target thread (group) is in the system thread group.
o Thread permissions may follow the thread hierarchy rather than the
default all-or-nothing policy.
public void checkAccess(Thread t) {
.. follow implementation given above ..
}
public void checkAccess(ThreadGroup tg) {
.. follow implementation given above ..
}
"-Djava.security.policy" [three instances.]
and the rest of the section through the next header ("The Secure Java
Launcher").
At the end of the 1st paragraph, change "checkExit() method" to
"checkExit(), checkPackageAccess(), and checkPackageDefinition()
methods".
[First we must prove that the author ...]
KeyStore JKS
CertificateFactory X509
SecureRandom SHA1PRNG
"method" to "methods". [contain a number of useful methods we'll
review here;]
added the following sentence:
In 1.2, however, the argument string is clearProviderProperties,
putProviderProperty, and removeProviderProperty, respectively.
hollow-star dingbat.
Deleted the footnote.
KeyStore.JKS
CertificateFactory.X509
Alg.Alias.CertificateFactory.X.509
SecureRandom.SHA1PRNG
entries:
insertProvider. + provider.getName()
removeProvider. + provider.getName()
- not called -
- not called -
getProperty. + key
setProperty. + key
classes in 1.2,]. Changed "six" to "nine" [nine core Java engine
classes].
In Table 8-5, added the following three rows:
KeyStore KeyStoreSpi KeyStoreSpi
CertificateFactory CertificateFactorySpi CertificateFactorySpi
SecureRandom SecureRandomSpi SecureRandomSpi
now reads "Fortunately, the next class allows us to import
certificates." [delete "both" and "to export".]
inserted the following new section:
[HeadB]
The CertificateFactory class
[Paragraph]
If you need to import a certificate into a program, you do so by using
the CertificateFactory class (java.security.cert.CertificateFactory).
That class is an engine class, and it has the following interface:
[ListVariableTerm]
public static CertificateFactory getInstance(String type) [filled-star
dingbat]
[ListVariableTerm]
public static CertificateFactory getInstance(String type, String provider)
[filled-star dingbat]
[ListVariable]
Return a certificate factory that may be used to import
certificates of the specified type (optionally implemented by the
given provider). A CertificateException will be thrown if the
given factory cannot be found or created; if the given provider is
not found, a NoSuchProviderException will be thrown. The default
Sun security provider has one certificate factory that works with
certificates of type X509.
[ListVariableTerm]
public String getProvider() [filled-star dingbat]
[ListTerm]
Return the provider that implemented this factory.
[ListVariableTerm]
public String getType() [filled-star dingbat]
[ListTerm]
Return the type of certificates that this factory can import.
[ListVariableTerm]
public final Certificate generateCertificate(InputStream is) [filled-star
dingbat]
[ListTerm]
Return a certificate that has been read in from the specified
input steram. For the default Sun security provider, the input
stream must be an X509 certificate in RFC 1421 format (that is, a
DER-encoded certificate that has been translated into 7-bit ASCII
characters); this is the most common format for transmission of
X509 certificates.
[ListVariableTerm]
public final Collection generateCertificates(InputStream is) [filled-star
dingbat]
[ListTerm]
Return a collection of certificates that have been defined in the
given input stream. For the default Sun provider, the input stream
in this case may have a single RFC 1421 formatted certificate, or
it may contain a certificate chain in PKCS#7 format.
[ListVariableTerm]
public final CRL generateCRL(InputStream is) [filled-star dingbat]
[ListTerm]
Define a certificate revocation list from the data in the input
stream.
[ListVariableTerm]
public final Collection generateCRLs(InputStream is) [filled-star dingbat]
[ListTerm]
Define a collection of CRLs from the data in the input stream.
[Paragraph]
Note that the CertificateFactory class cannot generate a new
certificate -- it may only import a certificate found in a file. This
is one reason why it's hard to provide a certificate authority based
solely on the standard Java API. In the next section, we'll see an
example of reading a certificate through this interface.
[Paragraph]
The CertificateFactory is an engine class, so it has a companion SPI
class -- the CertificateFactorySpi class -- that can be used if you
want to implement your own certificate factory. Implementing such a
class follows the familiar rules of engine classes: you must define a
constructor that takes the type name as a parameter and then for each
of the public methods listed above, you must implement a corresponding
engine method with the same parameters. Certificates are complicated
things, and parsing their encoding is a complicated procedure, so we
won't bother showing an example of the engine class.
an engine class..."] and all the text until the last paragraph on the
page that introduces the three bullets. Hence, the text reads
public abstract class X509Certificate extends Certificate implements
X509Extension
Provide an infrastructure to support X509 version 3 formatted
certificates.
An X509 certificate has a number of properties...
Changed the 5th line from
X509Certificate c = X509Certificate.getInstance(fr);
to
CertificateFactory cf = CertificateFactory.getInstance("X509");
X509Certificate c = (X509Certificate) cf.generateCertificate(fr);
Certificates".
of the third paragraph and the next ListVariableTerm as follows:
Revoked certificates themselves are represented by the X509CRLEntry
class (java.security.cert.X509CRLEntry):
public abstract class X509CRLEntry implements X509Extension [dingbat]
X509CRL class...") and the code at the bottom of the page.
definition. Changed the beginning of the next paragraph so that it
reads
Instances of the X509CRLEntry class are obtained by the getInstance()
method of the CertificateFactory. Once the class has been
instantiated, ...
Hence, the text from 237 to 238 reads:
public abstract class X509CRL implements X509Extension [dingbat]
Provide the support for an X509-based certificate revocation list.
Instances of the X509CRLEntry class are obtained by the getInstance()
method ...
that reads
public abstract boolean isRevoked(BigInteger serialNumber) [dingbat]
Indicated whether or not ...
"X509CRLEntry":
public abstract X509CRLEntry getRevokedCertificate(BigInteger bn)
[dingbat]
done,..."), inserted the following:
[Paragraph]
There is one more method of the X509CRL class, which it inherits from
its superclass, the CRL class (java.security.cert.CRL):
[ListVariableTerm]
public abstract boolean isRevoked(Certificate c) [filled-star dingbat]
[ListVariable]
Indicate whether or not the given certificate has been revoked by
this CRL.
When all is said and done,...
Changed line 5 from
c = X509Certificate.getInstance(data);
to
CertificateFactory cf = CertificateFactory.getInstance("X509");
ByteArrayInputStream bais = new ByteArrayInputStream(data);
c = (X509Certificate) cf.generateCertificate(bais);
Changed lines 10-12 from
X509CRL crl = X509CRL.getInstance(crlFile);
BigInteger bi = c.getSerialNumber();
if (crl.isRevoked(bi))
to
cf = CertificateFactory.getInstance("X509CRL");
X509CRL crl = (X509CRL) cf.generateCRL(crlFile);
if (crl.isRevoked(c))
Deleted lines 9-10
} catch (X509ExtensionException xee) {
// treat as no crl
Last paragraph, changed the final sentence to:
The IdentityScope class has been deprecated in 1.2.
4th paragraph: Changed the final sentence to read:
First, however, let's take a brief look at the notion of the identity
to whom a key belongs.
Then added this paragraph.
In Java's key management model, the association between a key and
its owner is application specific. There is an Identity class in
Java that was used for this purpose in 1.1, but it has been deprecated
(because, among other things, it used the wrong Certificate class).
However, there is still one interface that can be useful in your
own applications that use keys: the Principal interface.
Then deleted the Identities header and the next two paragraphs. Most of
the "Identities" section must be moved into Appendix B; changes in that
section will be listed later. For now, after the paragraph that was just
added, keep the "Principals" subsection (make sure that the sidebar is
attached to something in that section). Deleted the entire subsection
"The Identity Class", and also the subsection "signers". Note that we're
deleting an entire HeadA section, so the cross-reference at the
beginning of the chapter must have that entry removed as well.
Hence, the text flows as follows:
This framework is the ultimate goal of this chapter. First, however,
let's take a brief look at the notion of the identity to whom a key
belongs.
In Java's key management model, the association between ... that use
keys: the Principal interface.
[Still a HeadB]
Principals
Classes that are concerned with identities...
... discussion of principals, ending with the 7th paragraph on
page 246 ...
There are other methods listed in the Principal interface -- namely
... implemented correctly for all your classes, not just those that
implement the Principal interface.
[HeadA -- from page 253]
The KeyStore Class
"the deprecated Identity class" [This paragraph is in the middle of the
flow that we just discussed].
public class KeyStore [dingbat]
interface...") paragraph through the ListVariableTerm for the
getInstance() method (plus the corresponding ListVariable paragraph).
Replaced them with the following:
[paragraph]
The KeyStore class is an engine class; there is a corresponding
KeyStoreSpi class that you can use to write your own keystore (more
about that a little later). By default, the Sun security provider
implements a keystore called JKS (for Java KeyStore). Hence,
instances of the KeyStore class are predictably obtained via this
method:
[ListVariableTerm]
public static final KeyStore getInstance(String type) [filled-star
dingbat]
[ListVariableTerm]
public static final KeyStore getInstance(String type, String provider)
[filled-star dingbat]
[ListVariable]
Return an instance of the KeyStore class that implements the
given algorithm, supplied by the given provider, if applicable.
In the Sun security provider, the default algorithm name is
"JKS".
[paragraph]
If you do not want to hard-wire the name of the keystore algorithm
into your application, you may use this method to return the string
that should be passed to the getInstance() method:
[ListVariableTerm]
public static final String getDefaultType() [filled-star dingbat]
[ListVariable]
Return the default keystore algorithm for the environment. This
value is obtained by looking for a property called keystore.type
in the java.security file.
Then continued with the paragraph "When the keystore object is
created..."
public abstract void load(InputStream is, String password) [dingbat]
to
public final void load(InputStream is, char[] password) [dingbat]
public abstract void store(OutputStream os, String password) [dingbat]
to
public final void store(OutputStream os, char[] password) [dingbat]
these 2 entries (so that there will be 7 ListVariableTerm entries all
together):
public final String getType() [filled-star dingbat]
Return the name of the algorithm that this keystore implements.
public final String getProvider() [filled-star dingbat]
Return the name of the provider that supplied this keystore
implementation.
In the remaining ListVariableTerm entries on this page (5 at the top and
2 at the bottom), changed the word "abstract" to "final".
of entries...") delete the 2nd sentence. The paragraph now reads
The keystore holds two types of entries: certificate entries and key
entries. A certificate entry is an entry that contains...
public abstract PrivateKey getPrivateKey(String alias, String password) [dingbat]
to
public final Key getKey(String alias, char[] password) [dingbat]
Changed the final ListVariableTerm paragraphs from
public abstract void setKeyEntry(String alias, byte privateKey[],
Certificate chain[]) [dingbat]
public abstract void setKeyEntry(String alias, PrivateKey pk, String
password, Certificate chain[]) [dingbat]
to
public final void setKeyEntry(String alias, byte key[], Certificate
chain[]) [dingbat]
public final void setKeyEntry(String alias, Key k, char[] password,
Certificate chain[]) [dingbat]
In all other ListVariableTerm paragraphs on this page, changed "abstract"
to "final". This applies as well to the 1st ListVariableTerm paragraph
on page 259.
Changed line 4 from
KeyStore ks = KeyStore.getInstance();
to
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
Changed lines 12-13 from
System.out.println("The private key for " + args[0] +
" is " + ks.getPrivateKey(args[0], args[1]));
to
char c[] = new char[args[1].length()];
args[1].getChars(0, c.length, c, 0);
System.out.println("The private key for " + args[0] +
" is " + ks.getKey(args[0], c));
Implementing a keystore requires that we write a KeyStoreSpi class,
just as any other engine class. For most methods in the KeyStore
class, there is a corresponding abstract engine method in the
KeyStoreSpi class that you must provide an implementation for. A
complete list of these methods is given in Table 11-?.[simple table,
2 columns]
[Table Title]
Table 11-?. Engine methods in the KeyStoreSpi class
[Cell Heading]
KeyStore Class KeyStoreSpi class
[Cell Body]
aliases engineAliases
containsAlias engineContainsAlias
deleteEntry engineDeleteEntry
getCertificate engineGetCertificate
getCertificateAlias engineGetCertificateAlias
getCertificateChain engineGetCertificateChain
getCreationDate engineGetCreationDate
getKey engineGetKey
isCertificateEntry engineIsCertificateEntry
isKeyEntry engineIsKeyEntry
load engineLoad
setCertificateEntry engineSetCertificateEntry
setKeyEntry engineSetKeyEntry
size engineSize
store engineStore
Changed
public Date getCreationDate(String alias) {
to
public Date engineGetCreationDate(String alias) {
were changed by prepending the word engine to them and capitalizing
the next letter (the first one changes name as well):
engineGetKey()
engineGetCertificateChain()
engineGetCertificate()
engineGetCreationDate()
engineAliases()
engineContainsAlias()
engineSize()
engineIsKeyEntry()
engineIsCertificateEntry()
engineGetCertificateAlias()
Changed the 1st and 2nd lines from
public void setKeyEntry(String alias, PrivateKey pk,
String passphrase, Certificate chain[])
to
public void engineSetKeyEntry(String alias, Key key,
char[] passphrase, Certificate chain[])
above):
engineSetKeyEntry()
engineSetCertificateEntry()
engineDeleteEntry()
engineStore()
Changed the first line from
public void load(InputStream is, String password)
to
public void engineLoad(InputStream is, char[] password)
Changed the first line from
public int size()
to
public int engineSize()
combined the next paragraph into the 1st paragraph. In sum, the start of
this section ("Installing a KeyStore class") reads like this:
In order to use an alternate keystore implementation, you must install
your new class into a security provider. If necessary, you'll need to
establish a convention...
to the final entry on the page:
public final byte[] sign()
public final int sign(byte[] outbuf, int offset, int len) [filled-star
dingbat]
Create the digital signature...
In the first of these methods, the signature is returned from
the method. Otherwise, the signature is stored into the
outbuf array at the given offset, and the length of the
signature is returned. If the output buffer is too small to hold
the data, an IllegalArgumentException will be thrown.
Changed line 11 from
KeyStore ks = KeyStore.getInstance();
to
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
Changed line 15 from
PrivateKey pk = ke.getPrivateKey(args[0], args[1]);
to
char c[] = new char[args[1].length()];
args[1].getChars(0, c.length, c, 0);
PrivateKey pk = (PrivateKey) ks.getKey(args[0], c);
Changed line 25 from
KeyStore ks = KeyStore.getInstance();
to
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
Changed lines 20-22 from
X509Certificate x509 = X509Certificate.getInstance(
new ByteArrayInputStream(b));
certificate = x509;
to
CertificateFactory cf = CertificateFactory.getInstance("X509");
certificate = cf.generateCertificate(new ByteArrayInputStream(b));
Changed lines 6-9 from
KeyStore ks = KeyStore.getInstance();
ks.load(new FileInputStream(
System.getProperty("user.home") +
File.separator + ".keystore"), args[1]);
to
KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
char c[] = new char[args[1].length()];
args[1].getChars(0, c.length, c, 0);
ks.load(new FileInputStream(
System.getProperty("user.home") +
File.separator + ".keystore"), c);
Changed line 12 from
PrivateKey pk = ks.getPrivateKey(args[0], args[1]);
to
PrivateKey pk = (PrivateKey) ks.getKey(args[0], c);
Changed line 2 from
ks = KeyStore.getInstance();
to
ks = KeyStore.getInstance(KeyStore.getDefaultType());
Changed line 10-27 from
String signer = ks.getCertificateAlias(c);
if (signer != null) {
...
break;
}
}
to
try {
String signer = ks.getCertificateAlias(c);
if (signer != null) {
System.out.println("We know the signer as " + signer);
return;
}
for (Enumeration alias = ks.aliases(); alias.hasMoreElements(); ) {
String s = (String) alias.nextElement();
try {
sCert = (X509Certificate) ks.getCertificate(s);
} catch (Exception e) {
continue;
}
if (name.equals(sCert.getSubjectDN().getName())) {
issuerCert = sCert;
break;
}
}
} catch (KeyStoreException kse) {
throw new CertificateException("Invalid keystore");
}
Changed line 18 from
protected Class findLocalClass(String name) {
to
protected Class findClass(String name) {
Changed lines 32-35 (keep in bold) from
Object ids[] = (Object []) classIds.get(urlName);
CodeSource cs = getCodeSource(urlBase, ids);
cl = defineClass(name, buf, 0, buf.length, cs, ids);
to
Certificate ids[] = (Certificate[]) classIds.get(urlName);
CodeSource cs = new CodeSource(urlBase, ids);
cl = defineClass(name, buf, 0, buf.length, cs);
Deleted line 40
AccessController.beginPrivileged();
Changed lines 47-47 from
CodeSource cs = getCodeSource(urlBase, null);
cl = defineClass(name, buf, 0, buf.length, cs, null);
to
CodeSource cs = new CodeSource(urlBase, null);
cl = defineClass(name, buf, 0, buf.length, cs);
Deleted lines 3-4
} finally {
AccessController.endPrivileged();
Changed lines 6-8 (keep in bold) from
Object ids[] = je.getIdentities();
if (ids != null)
classIds.put(className, ids);
to
Certificate c[] = je.getCertificates();
if (c == null)
c = new Certificate[0];
classIds.put(className, c);
the entry looks like this:
protected abstract byte[] engineSign()
protected int engineSign(byte[] outbuf, int offset, int len) [filled-star dingbat]
Create the signature based on the ...
2 release for JDK 1.2 of"
Changed 1st sentence of the 3rd paragraph to read:
There are five new engine classes in the JCE: the Cipher, KeyAgreement,
KeyGenerator, Mac, and SecretKeyFactory engines.
In that paragraph, changed the 3rd sentence to read
In addition to implementations of the new engines, the SunJCE
security provider gives us a key factory and a key pair generator
for Diffie-Hellman (DH) keys as well as a new engine for working
with keystores.
In Table 13-1, added the following entries
Mac HmacSHA1
Mac HmacMD5
KeyStore JCEKS
public interface RSAPrivateKey extends PrivateKey
public interface RSAPrivateKeyCrt extends PrivateKey
public interface RSAPublicKey extends PublicKey
This set of interfaces ...
Note that the footnote has been deleted as well.
public final void init(int strength)
public final void init(int strength, SecureRandom sr)
public final void engineInit(int strength, SecureRandom sr)
public class DESParameterSpec implements AlgorithmParameterSpec
but kept the DESKeySpec and the associated text. In the ListVariable
paragraph, changed "These classes" to "This class".
In the next ListVariable paragraph, deleted the sentence "Note that
there is no corresponding parameter specification for this algorithm."
public class RSAPrivateKeySpec implements KeySpec
public class RSAPrivateKeyCrtSpec implements KeySpec
public class RSAPublicKeySpec implements KeySpec
These classes implement ...
with the following set of entries
public class IvParameterSpec implements AlgorithmParameterSpec
This class implements an initialization vector. Initialization
vectors are used in many algorithms; notably in DES.
public class RC2ParameterSpec implements AlgorithmParameterSpec
public class RC5ParameterSpec implements AlgorithmParameterSpec
These classes implement the algorithm parameter specifications
for RC2 and RC5 encryption.
public class SecretKeySpec implements KeySpec
This class implements a key specification for the new class of
secret keys.
In the 2nd entry, changed "DESParameterSpec" to "IvParameterSpec" in
columns 1 and 3.
Replaced the final 3 entries (about RSA keys) with the following:
RC2ParameterSpec byte[] getIV() RC2ParameterSpec(int effective)
int getEffectiveKeyBits() RC2ParameterSpec(int effective,
byte[] iv)
RC2ParameterSpec(int effective,
byte[] iv, int offset)
RC5ParameterSpec byte[] getIV() RC5ParameterSpec(int version,
int rounds, int wordSize)
int getRounds() RC5ParameterSpec(int version,
int rounds, int wordSize,
byte[] iv)
int getVersion() RC5ParameterSpec(int version,
int rounds, int wordSize,
byte[] iv, int offset)
int getWordSize()
SecretKeySpec byte[] getEncoded() SecretKeySpec(byte[] key,
String Algorithm)
SecretKeySpec(byte[] key,
int offset, String Algorithm)
public final void init(int op, Key k, AlgorithmParameters ap)
public final void init(int op, Key k, AlgorithmParameters ap,
SecureRandom sr)
parameter specification" to "algorithm parameter specification or
algorithm parameters". Changed "DESParameterSpec" to "IvParameterSpec".
The whole last sentence reads
In these cases, the initialization vector must be passed to the
init() method within the algorithm parameter specification or
algorithm parameters; the IvParameterSpec class is typically used
to do this for DES encryption.
Changed line 6 from
DESParameterSpec dps = new DESParameterSpec(iv);
to
IvParameterSpec dps = new IvParameterSpec(iv);
alternate choice to using an initialization vector...).
to "it":
The rationale behind this system is that it allows the ...
header", the first word is now "As".
public void engineInit(int op, Key key, AlgorithmParameters ap,
SecureRandom sr)
Before line 18
public byte[] engineUpdate(byte in[], int off, int len) {
added the following:
public void engineInit(int i, Key k, AlgorithmParameters ap,
SecureRandom sr) throws InvalidKeyException,
InvalidAlgorithmParameterException {
throw new InvalidAlgorithmParameterException(
"Algorithm parameters not supported in this class");
}
Deleted line 3
oos.close();
Deleted line 11
pw.print("XXXXXXXX");
Immediately after line 13
pw.close();
added (same indentation level)
oos.writeObject(c.getIV());
oos.close();
this example, we've sent 8 arbitrary...")
Deleted line 9
ois.close();
Changed line 12 from
c.init(Cipher.DECRYPT_MODE, key);
to
c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec((byte [])
ois.readObject()));
changed the other three entries, and add a sentence to the end of the
ListVariable entry so that they appear as follows:
public final void init(Key k)
public final void init(Key k, SecureRandom sr)
public final void init(Key k, AlgorithmParameterSpec aps)
public final void init(Key k, AlgorithmParameterSpec aps, SecureRandom
sr)
Initialize the key agreement engine. The parameter specifications
(if present) will vary depending upon the underlying algorithm; if
the parameters are invalid, of the incorrect class, or not
supported, an InvalidAlgorithmParameterException is generated. This
method will also perform the first phase of the key agreement
protocol.
to:
public final Key doPhase(Key key, boolean final)
Execute the next phase of the key agreement protocol. Key
agreement protocols usually require a set of operations to be
performed in a particular order. Each operation is represented
in this class by a particular phase, which usually requires a
key to succeed. If the provided key is not supported by the key
agreement protocol, is incorrect for the current phase, or is
otherwise invalid, an InvalidKeyException will be thrown.
The number of phases, along with the types of keys they
require, vary drastically from key exchange algorithm to
algorithm. Your security provider must document the types of
keys required for each phase. In addition, you must specify
which is the final phase of the protocol.
In the next set of entries, in the 2nd ListVariable paragraph, changed
"starting with a new set of calls to the doPhase() method" to
"starting with a new call to the init() method".
Changed line 18 from
ka.doPhase(1, kp.getPrivate());
to
ka.init(kp.getPrivate());
Changed line 28 from
ka.doPhase(2, pk);
to
ka.doPhase(pk, true);
Changed line 19 from
ka.doPhase(1, kp.getPrivate());
to
ka.init(kp.getPrivate());
Changed line 27 from
ka.doPhase(2, pk);
to
ka.doPhase(pk, true);
ka.doPhase(1, k
for testing."
entries
-storetype type
Specify the type of keystore that the keytool should operate
on. This defaults to the keystore type in the java.security
file, which defaults to JKS, the keystore type provided by the
Sun security provider.
-storepass entry:
-storetype storetype
-trustcacerts
Use the cacerts file to obtain trusted certificates from
certificate authorities that have signed the
certificate that is being imported.
-storepass entry:
-storetype storetype
[This happens twice on the page: once at the very top and once at the
very bottom.]
-certreq
Generate a certificate signing request...
piccolo% keytool -certreq -alias sdo -file sdoCSR.cer
added the following:
We've mentioned in this section that in order to import a
certificate like this that the self-signed certificate of the
certificate authority must already be in the keystore. However,
there's a bootstrapping issue involved in this: how do you get the
initial certificates for the certificate authorities into a
keystore?
The JDK comes with a set of five pre-installed certificates: four
from VeriSign, which issues certificates at different levels, and one
from RSA Data, Inc. These certificates are in the cacerts file in
the ${JAVAHOME}/lib/security directory. While those certificates
are not present in your .keystore file, you can still import
certificates into your .keystore file by using the -trustcacerts
option: in that case, as long as the certificate you're importing
has been signed by one of the authorities in the cacerts file,
the import operation will succeed.
Hence, if we'd sent our CSR request in the above example to
VeriSign and the returned certificate from VeriSign was stored in
the sdo.cer file, we could import it with this command:
piccolo% keytool -import -file sdo.cer -alias sdo -trustcacerts
If you want to use the certificates of the certificate authorities
programmatically, you may do so by creating a keystore of type JKS,
and loading that keystore from the cacerts file.
-storepass entry:
-storetype storetype
[This happens twice on this page]
-storepass entry:
-storetype storetype
[This happens three times on this page]
subsection:
[HeadB]
Importing a 1.1-based Identity database
[Paragraph]
The keystore in 1.2 is incompatible with the identity database in
1.1, but the keytool is capable of converting between the two. To
convert a 1.1 identity database to a 1.2 keystore, use this
command:
[ListVariableTerm, in Italics]
-identitydb
[ListVariable]
Convert a 1.1 identity database. This command has the following
options
[>ListVariableTerm, in Italics]
-v
-keystore keystore
-keypass keypass
-storepass storepass
-storetype storetype
-file db_file
[>ListVariable]
The file name of the 1.1 identity database. The default for
this is identitydb.obj in the user's home directory.
[paragraph]
With this command, each trusted entry in the identity database will
be created as a key entry in the keystore. All other entries in the
identity database will be ignored.
[ListVariableEntry in italics]
A KeyStore type
[ListVariable]
You must have an entry in this file that lists the default type
of keystore that an application should use. By default, that
type is listed as
[CodeIndent]
keystore.type=jks
[ListVariable]
If you change the type listed in this entry, the new type will
be used whenever anyone requests the default keystore
implementation.
from
policy.provider=java.security.PolicyFile
to
policy.provider=sun.security.provider.PolicyFile
Changed "-usepolicy" to "-Djava.security.policy"
[two occurences].
1st code line now reads:
-Djava.security.policy=/globals/java.policy
2nd code line now reads
-Djava.security.policy==/globals/java.policy
In the next ListParagraph, changed the sentence
"The -usepolicy argument also initializes a security manager for
the application, so you may use it by itself if you want to use
only the files listed in the java.security file"
to
"The -Djava.security.policy argument must be used in conjunction
with the -Djava.security.manager; if you want to use only the
files listed in the java.security file, specify
-Djava.security.manager without -Djava.security.policy.". argument,
which initializes a security manager..."
changed "-usepolicy" to "-Djava.security.policy" [two occurences].
associated paragraphs. The last item before the next subsection "The
java.policy File" now reads "Whether or not the -Djava.security.policy
argument can be used."
begins "As a result, it is not possible to upgrade...").
Changed the last sentence of the 2nd paragraph (the one that begins
"In particular, although the Identity class...") to read:
In addition, the Identity and IdentityScope classes have been
deprecated in 1.2, so you should really move to the keystore
implementation as soon as possible.
The section on Identities from Chapter 11, pages 245-253, was moved
into this appendix. It now starts before the section on "Identity
Scopes", so that after the first two introductory paragraphs, there is
the "Identity" header, the text from chapter 11, and then the "Identity
Scopes" header.
In addition, the text that was moved into this appendix now has some
changes as well. These are listed here with reference to their original
page number:
(245) Deleted the 2nd paragraph of the section (the one beginning
"The classes we'll examine in this section were originally
designed..."). The next subsection -- the one on Principals --
remained as part of chapter 11 (including the sidebar on page 247).
Hence, after the 1 introductory paragraph of this section, the
next thing is the subsection entitled "The Identity Class".
(246) Last ListVariableTerm ("public class Identity...") -- append
a hollow-star dingbat to the end of the line.
(247) Deleted the last paragraph (the one that begins "In 1.1, the
Identity class is an abstract class...").
(248) 4th bullet: Deleted the sentence that begins "This features is
primarily used to support the javakey...".
(248) 5th bullet: Deleted the extra punctuation at the end of the
sentence.
(248) Deleted the sidebar "Identities and Identity Scopes"
(248) Last paragraph: deleted the last clause of the last sentence,
so that the last sentence reads "You're free to add that feature
to your own identity class."
(249) At the end of all ListVariableTerm entries, appended a
hollow-star dingbat.
(249) 7th ListVariableTerm entry: deleted the ListVariableTerm paragraph
public void addCertificate(java.security.cert.Certificate
certificate)
The other ListVariableTerm (that uses java.security.Certificate)
remains, as does the ListVariable paragraph.
(249) 8th ListVariableTerm entry: deleted the ListVariableTerm paragraph
public void removeCertificate(java.security.cert.Certificate
certificate)
The other ListVariableTerm (that uses java.security.Certificate)
remains, as does the ListVariable paragraph.
(250) 1st ListVariableTerm entry: deleted the ListVariableTerm paragraph
public java.security.cert.Certificate[] getCertificates()
The other ListVariableTerm (that uses java.security.Certificate)
remains, as does the ListVariable paragraph.
(250) Changed the first paragraph (that begins "If you have an identity
object") to be this single sentence:
There are two ways to obtain an identity object -- via the
getIdentity() method of the IdentityScope class or by
implementing and constructing an instance of your own subclass
of the Identity class.
(251) Deleted the first ListVariableTerm entry and its associated
ListVariable paragraph.
(251) In the first paragraph of the subsection "The Identity class
and the security manager", deleted the first clause of the 2nd
sentence ("This mechanism has changed somewhat between 1.1. and
1.2"):
... performed by untrusted classes. Table B-1 lists the
methods...
In the table (now Table B-1), removed the 2nd column entirely so
that there is only a column for Method and one for Argument in 1.1.
But change the column heading to say simply "Argument".
In the next paragraph, deleted the first two words "In 1.1" and
delete the final sentence (the one that begins "For example, in 1.1
a call to the...").
(252) 1st full paragraph (the one that begins "In common
implementations..."): changed this paragraph to the single sentence:
In common implementations of the security manager, this string
is ignored and trusted classes are typically able to work with
identities while untrusted classes are not.
(253) Table (now Table B-2): Remove the 2nd column entirely, and
changed the heading of the last column to "Parameter".
In the next paragraph, deleted the words "in 1.1".
After this section, continued to the section "Identity Scopes".
appended a hollow-star dingbat
begins "Changes in the definition of the Identity class between...")
section, delete the phrase 'in 1.1 and an argument of
"IdentityScopre.setSystemScope" in 1.2'.
In the next paragraph, deleted the two words "in 1.1".
phrase "we touched upon in Chapter 11".
one that begins "In the realm of Java 1.2...").
an additional bug reported in July 1998 regarding the class loader, but
this applied only to Netscape's implementation, not to the standard
JDK."
Replaced the lines
public static native void beginPrivileged();
public static native void beginPrivileged(AccessControlContext);
with
public static native Object doPrivileged(PrivilegedAction);
public static native Object doPrivileged(PrivilegedAction,
AccessControlContext);
public static native Object doPrivileged(PrivilegedExceptionAction);
public static native Object doPrivileged(PrivilegedExceptionAction,
AccessControlContext);
Deleted the line
public static native void endPrivileged();
Changed the lines
// Instance Methods
protected AlgorithmParameters(AlgorithmParametersSpi, Provider, String)
;
to the lines
// Constructors
protected AlgorithmParameters(AlgorithmParametersSpi, Provider, String)
;
// Instance Methods
Changed the line
public CodeSource(URL, PublicKey[]);
to
public CodeSource(URL, Certificate[]);
Changed the line
public final PublicKey[] getKeys();
to
public final Certificate[] getCertificates();
public boolean implies();
Deleted the line
public Identity(String, String, Certificate[], PublicKey);
Changed the line
public Certificate[] getCertificates();
to
public Certificate[] certificates();
After the line
public void initialize(int);
Added the lines
public void initialize(int, SecureRandom);
public void initialize(AlgorithmParameterSpec, SecureRandom);
Changed the line
public KeyStore();
to
protected KeyStore(KeyStoreSpi, Provider, String);
Changed the line
public static final KeyStore getInstance();
to
public static final String getDefaultType();
public static KeyStore getInstance(String);
public static KeyStore getInstance(String, String);
Changed the line
public abstract PrivateKey getPrivateKey(String, String);
to
public final Key getKey(String, char[]);
public final Provider getProvider();
public final String getType();
Changed the line
public abstract void load(InputStream, String);
to
public final void load(InputStream, char[]);
Changed the lines
public abstract void setKeyEntry(String, PrivateKey, String,
Certificate[]);
to
public final void setKeyEntry(String, Key, char[], Certificate[]);
Changed the line
public abstract void store(OutputStream, char[])
to
public final void store(OutputStream, char[])
Also, in every other line of this entry that has the word "abstract",
changed "abstract" to "final"
After the line
public abstract boolean implies(Permission);
added the lines
public boolean isReadOnly();
public void setReadOnly();
Deleted the lines
public boolean isReadOnly();
public void setReadOnly();
Changed the line
public abstract Permissions evaluate(CodeSource);
to
public abstract PermissionCollection getPermissions(CodeSource);
Changed the line
public ProtectionDomain(CodeSource, Permissions);
to
public ProtectionDomain(CodeSource, PermissionCollection);
Changed the line
public final Permissions getPermissions();
to
public final PermissionCollection getPermissions();
Changed all the lines after
// Instance Methods
to
public synchronized void clear();
public Set entrySet();
public String getInfo();
public String getName();
public double getVersion();
public Set keySet();
public synchronized void load(InputStream);
public synchronized Object put(Object, Object);
public synchronized void putAll(Map);
public synchronized Object remove(Object);
public String toString();
public Collection values();
}
Changed all the lines after
// Protected Instance Methods
to
protected final Class defineClass(String, byte[], int, int,
CodeSource);
protected PermissionCollection getPermissions(CodeSource);
}
After the line
public final byte[] sign();
added the line
public final int sign(byte[], int, int);
After the line
public final byte[] engineSign();
added the line
public final int engineSign(byte[], int, int);
Changed the line
public UnresolvedPermission(String, String, String, PublicKey);
to
public UnresolvedPermission(String, String, String, Certificate[]);
entry:
Class java.security.cert.CertificateFactory
A certificate factory is used to import certificates or certificate
revocation lists from a file or other input stream.
Class Definition
public java.security.cert.CertificateFactory
extends java.lang.Object {
// Constructors
protected CertificateFactory(CertificateFactorySpi, Provider,
String);
// Class Methods
public static final CertificateFactory getInstance(String);
public static final CertificateFactory getInstance(String,
String);
// Instance Methods
public final CRL generateCRL(InputStream);
public final Collection generateCRLs(InputStream);
public final Certificate generateCertificate(InputStream);
public final Collection generateCertificates(InputStream);
public final Provider getProvider();
public final String getType();
}
See also: X509Certificate, X509CRLEntry
First, the name of this entry has been changed to
java.security.cert.X509CRLEntry (and it moved to its correct place in
alphabetical order, which is after X509CRL).
Then, after the line
public abstract boolean hasExtensions();
added the line
public abstract boolean hasUnsupportedCriticalExtension();
Removed the lines
// Class Methods
public static final X509Certificate getInstance(InputStream);
public static final X509Certificate getInstance(byte[]);
Removed the line
public abstract boolean hasUnsupportedCriticalExtension();
Removed the lines
// Class Methods
public static final X509CRL getInstance(InputStream);
public static final X509CRL getInstance(byte[]);
Changed the lines
public abstract RevokedCertificate
getRevokedCertificate(BigInteger);
to
public abstract X509CRLEntry getRevokedCertificate(BigInteger);
After the line
public abstract int getVersion();
added the line
public abstract boolean hasUnsupportedCriticalExtension();
In the See Also section, changed "RevokedCertificate" to "X509CRLEntry".
After the line
public abstract Set getNonCriticalExtensionOIDs();
added the line
public abstract boolean hasUnsupportedCriticalExtension();
In the See Also section, changed "RevokedCertificate" to "X509CRLEntry".
After the line
public final int getOutputSize(int);
added the line
public final AlgorithmParameters getParameters();
After the lines
public final void init(int, Key, AlgorithmParameterSpec,
SecureRandom);
added the lines
public final void init(int, Key, AlgorithmParameters);
public final void init(int, Key, AlgorithmParameters, SecureRandom);
After the line
protected abstract byte[] engineGetIV();
added the line
protected abstract int engineGetOutputSize(int);
After the lines
protected abstract void engineInit(int, Key,
AlgorithmParameterSpec, SecureRandom);
added the line
protected abstract void engineInit(int, Key, AlgorithmParameters,
SecureRandom);
Changed the line
public final Key doPhase(int, Key);
to
public final Key doPhase(Key, boolean);
Changed the lines
public final void init(SecureRandom);
public final void init(AlgorithmParameterSpec);
public final void init(AlgorithmParameterSpec, SecureRandom);
to
public final void init(Key);
public final void init(Key, SecureRandom);
public final void init(Key, AlgorithmParameterSpec);
public final void init(Key, AlgorithmParameterSpec, SecureRandom);
Changed the line
protected abstract Key engineDoPhase(int, Key);
to
protected abstract Key engineDoPhase(Key, boolean);
Changed the lines
protected abstract void engineInit(SecureRandom);
protected abstract void engineInit(AlgorithmParameterSpec,
SecureRandom);
to
protected abstract void engineInit(Key, SecureRandom);
protected abstract void engineInit(Key, AlgorithmParameterSpec,
SecureRandom);
After the line
public final Provider getProvider();
added the lines
public final void init(int);
public final void init(int, SecureRandom);
After the line
protected abstract SecretKey engineGenerateKey();
added the line
protected abstract void engineInit(int, SecureRandom);
After the line
public final Object getObject(Cipher);
added the lines
public final Object getObject(Key);
public final Object getObject(Key, String);
This section was renamed to java.security.interfaces.RSAPrivateKey,
and the first line of its interface definition now reads
public abstract interface java.security.interfaces.RSAPrivateKey
It has been moved to the appropriate section on
java.security.interfaces, immediately following DSAPublicKey.
Deleted this entry entirely.
This section was renamed to java.security.interfaces.RSAPublicKey,
and the first line of its interface definition now reads
public abstract interface java.security.interfaces.RSAPublicKey
It was moved to the appropriate section on
java.security.interfaces, immediately following RSAPrivateKey.
Before the line (left a blank line before this)
// Constructors
added the lines
// Constants
public static final int DES_KEY_LEN;
After the line
public static boolean isParityAdjusted(byte[], int);
added the line
public static boolean isWeak(byte[], int);
This entry was renamed to javax.crypto.spec.IvParameterSpec and
its first line now reads
public javax.crypto.spec.IvParameterSpec
Changed the lines
public DESParameterSpec(byte[]);
public DESParameterSpec(byte[], int, int);
to
public IvParameterSpec(byte[]);
public IvParameterSpec(byte[], int, int);
Re-alphabetized this entry so that it follows DHPublicKeySpec
Before the line (left a blank line before this)
// Constructors
added the lines
// Constants
public static final int DES_EDE_KEY_LEN;
This entry was renamed to java.security.spec.RSAPrivateKeySpec and
its first line was changed to
public java.security.spec.RSAPrivateKeySpec
In the See also list, removed RSAPrivateKeyCrtSpec
It was moved into the section on java.security.spec, immediately
following the PKCS8EncodedKeySpec entry.
This entry was renamed to java.security.spec.RSAPublicKeySpec and
its first line was changed to
public java.security.spec.RSAPublicKeySpec
It was moved into the section on java.security.spec, immediately
following the RSAPrivateKeySpec entry.
After the line
// Class Methods
added
public static ClassLoader getSystemClassLoader();
Deleted the lines
public URL getLocalResource(String);
public Enumeration getLocalResources(String);
Deleted the line
public void checkPackageAccess(String);
After the line
protected final Class defineClass(byte[], int, int);
added the line
protected final Class defineClass(String, byte[], int, int,
ProtectionDomain);
After the line
protected Package definePackage(String, String, String, String, String,
String, String, URL);
added the lines
protected Class findClass(String);
protected String findLibrary(String);
After the line
public URLClassLoader(URL[], ClassLoader);
added the lines
public URLClassLoader(URL[]);
public URLClassLoader(URL[], ClassLoader, URLStreamHandlerFactory);
Changed the lines
public static URL fileToURL(File);
public static URL[] pathToURLs(String);
to
public static URLClassLoader newInstance(URL[]);
public static URLClassLoader newInstance(URL[], ClassLoader);
Changed the lines
public URL getLocalResource(String);
public Enumeration getLocalResources(String);
public void invokeClass(String, String[]);
public void setListener(URLClassLoader$Listener);
to
public URL findResource(String);
public Enumeration findResources(String);
public URL[] getURLs();
Changed the lines
protected void checkPackageDefinition(String);
protected Class defineClass(String, Resource);
protected Package definePackage(String, Attributes, URL);
protected Class findLocalClass(String);
to
protected void addURL(URL);
protected Package definePackage(String, Manifest, URL);
protected Class findClass(String);
protected PermissionCollection getPermissions(CodeSource);
Deleted all the lines after
// Instance Methods
except for the closing brace
After the line
public static Class loadClass(String);
added the line
public static Class loadClass(String, String);
|
http://www.oreilly.com/catalog/errata.csp?isbn=9781565924031
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
XML Schema Q: Element with either simpleContent or complexContent
Discussion in 'XML' started by David Norm30
Similar threads:
- Random, Nov 19, 2004. Replies: 1. Views: 1,803. Last post: Markus, Nov 23, 2005.
- How to enforce a pattern on complexContent? Swaroop Kumar, Aug 19, 2003, in forum: XML. Replies: 0. Views: 450. Last post: Swaroop Kumar, Aug 19, 2003.
- [XML Schema] Including a schema document with absent target namespace to a schema with specified tar... Stanimir Stamenkov, Apr 22, 2005, in forum: XML. Replies: 3. Views: 1,444. Last post: Stanimir Stamenkov, Apr 25, 2005.
- Schema: error with restriction in simpleContent. Kai Schlamp, Oct 31, 2007, in forum: XML. Replies: 3. Views: 620. Last post: Pavel Lepin, Nov 1, 2007.
|
http://www.thecodingforums.com/threads/xml-schema-q-element-with-either-simplecontent-or-complexcontent.167572/
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
GLX extended servers make a subset of their visuals available for OpenGL rendering. Drawables created with these visuals can also be rendered using the core X renderer and with the renderer of any other X extension that is compatible with all core X visuals.
GLX extends drawables with several buffers other than the standard color buffer. These buffers include back and auxiliary color buffers, a depth buffer, a stencil buffer, and a color accumulation buffer. Some or all are included in each X visual that supports OpenGL.
To render using OpenGL into an X drawable, you must first choose a visual that defines the required OpenGL buffers. glXChooseVisual can be used to simplify selecting a compatible visual. If more control of the selection process is required, use XGetVisualInfo and glXGetConfig to select among all the available visuals.
Use the selected visual to create both a GLX context and an X drawable. GLX contexts are created with glXCreateContext, and drawables are created with either XCreateWindow or glXCreateGLXPixmap. Finally, bind the context and the drawable together using glXMakeCurrent. This context/drawable pair becomes the current context and current drawable, and it is used by all OpenGL commands until glXMakeCurrent is called with different arguments.
Both core X and OpenGL commands can be used to operate on the current drawable. The X and OpenGL command streams are not synchronized, however, except at explicitly created boundaries generated by calling glXWaitGL, glXWaitX, XSync, and glFlush.
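As a small illustration of such an explicit boundary (a sketch only, not part of the original example; dpy, win and gc are assumed to exist already):

/* Sketch: mix GL and core X rendering on the same drawable.
 * dpy, win and gc are assumed to be an open display, a GLX-capable window
 * with a current GLX context, and a GC created elsewhere. */
static void draw_mixed(Display *dpy, Window win, GC gc)
{
    glClear(GL_COLOR_BUFFER_BIT);                  /* OpenGL rendering             */
    glXWaitGL();                                   /* finish GL before X renders   */
    XFillRectangle(dpy, win, gc, 10, 10, 50, 50);  /* core X rendering             */
    glXWaitX();                                    /* finish X before more GL work */
    glFlush();
}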
#include <GL/glx.h>
#include <GL/gl.h>
#include <unistd.h>

static int attributeListSgl[] = {
    GLX_RGBA,
    GLX_RED_SIZE, 1,
    GLX_GREEN_SIZE, 1,
    GLX_BLUE_SIZE, 1,
    None
};

static int attributeListDbl[] = {
    GLX_RGBA,
    GLX_DOUBLE_BUFFER,
    GLX_RED_SIZE, 1,
    GLX_GREEN_SIZE, 1,
    GLX_BLUE_SIZE, 1,
    None
};

static Bool WaitForNotify(Display *d, XEvent *e, char *arg)
{
    return (e->type == MapNotify) && (e->xmap.window == (Window)arg);
}

int main(int argc, char **argv)
{
    Display *dpy;
    XVisualInfo *vi;
    Colormap cmap;
    XSetWindowAttributes swa;
    Window win;
    GLXContext cx;
    XEvent event;
    int swap_flag = FALSE;

    dpy = XOpenDisplay(0);

    vi = glXChooseVisual(dpy, DefaultScreen(dpy), attributeListSgl);
    if (vi == NULL) {
        vi = glXChooseVisual(dpy, DefaultScreen(dpy), attributeListDbl);
        swap_flag = TRUE;
    }

    cx = glXCreateContext(dpy, vi, 0, GL_TRUE);

    cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                           vi->visual, AllocNone);

    swa.colormap = cmap;
    swa.border_pixel = 0;
    swa.event_mask = StructureNotifyMask;
    win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 100, 100,
                        0, vi->depth, InputOutput, vi->visual,
                        CWBorderPixel|CWColormap|CWEventMask, &swa);
    XMapWindow(dpy, win);
    XIfEvent(dpy, &event, WaitForNotify, (char*)win);

    glXMakeCurrent(dpy, win, cx);

    glClearColor(1, 1, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
    if (swap_flag) glXSwapBuffers(dpy, win);
|
http://xfree86.org/4.3.0/glXIntro.3.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
So I've got a form and it has some accepts_nested_attributes_for items in it.
It's a project/task thing, and what I'm wondering is whether I can put the person creating the project in as the leader (the default first value for project_member) in the form itself, or if this is something that should be handled in the create method of the controller.
Here's the new action on the project controller
def new
  @project = Project.new
  3.times { @project.tasks.build }
  3.times { @project.project_members.build }
  3.times { @project.milestones.build }

  respond_to do |format|
    format.html { render :layout => "base" }
    format.js { render :layout => false }
  end
end
So basically, when building the project_members field, I'm wanting to be able to grab the person creating the form (based on session[:user]) and insert them as the first field in the list.
Any help or pointers to some kind of tutorial is REALLY appreciated.
I would definitely do that in the create method, not in the form itself. Just use whatever current_user method your authentication system is using.
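Something along these lines in the create action would do it (a sketch only; current_user and the :role attribute are assumptions about this particular app, not code from the thread):

def create
  @project = Project.new(params[:project])
  # Default the logged-in user to the first project member (the leader)
  # if the form didn't already supply one.
  if @project.project_members.empty?
    @project.project_members.build(:user_id => current_user.id, :role => "leader")
  end

  if @project.save
    redirect_to @project
  else
    render :action => "new", :layout => "base"
  end
end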
that's what I was thinking was the best idea.
Thanks.
|
http://community.sitepoint.com/t/inserting-default-value-for-first-nested-builder-item-in-form/62090
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Hello everyone,
I created a program that checks if the input is a palindrome: if it is, it returns True, otherwise it returns False.
(I must use recursion.)
This is my code:
str="abaa"
def is_palindrome(str):
n=len(str)
if n==0:
return True
if str[n-1]==str[-n]:
is_palindrome(str[-(n)+1:n-1])
return True
else:
return False
Now, the problem is that it does not work properly. I checked it on Python Tutor, and for some reason unknown to me, it returned True for every input.
I thank you very much for your help.
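For comparison, a corrected recursive version (not part of the original post) that actually uses the result of the recursive call:

def is_palindrome(s):
    # Base case: empty or single-character strings are palindromes.
    if len(s) <= 1:
        return True
    # Compare the outer characters and recurse on the inner slice,
    # returning the recursive result instead of discarding it.
    return s[0] == s[-1] and is_palindrome(s[1:-1])

print(is_palindrome("abaa"))  # False
print(is_palindrome("abba"))  # True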
|
http://forums.devshed.com/python-programming-11/help-python-recursion-954076.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
/*
 * YP password update protocol
 * Requires unix authentication
 */

#ifndef RPC_HDR
%#ifndef lint
%/*static char sccsid[] = "from: @(#)yppasswd.x 1.1 87/04/13 Copyr 1987 Sun Micro";*/
%/*static char sccsid[] = "from: @(#)yppasswd.x 2.1 88/08/01 4.0 RPCSRC";*/
%static char rcsid[] = "$Id: yppasswd.x,v 1.2 2000/03/05 02:04:45 wsanchez Exp $";
%#endif /* not lint */
#endif

program YPPASSWDPROG {
        version YPPASSWDVERS {
                /*
                 * Update my passwd entry
                 */
                int YPPASSWDPROC_UPDATE(yppasswd) = 1;
        } = 1;
} = 100009;

struct x_passwd {
        string pw_name<>;       /* username */
        string pw_passwd<>;     /* encrypted password */
        int pw_uid;             /* user id */
        int pw_gid;             /* group id */
        string pw_gecos<>;      /* in real life name */
        string pw_dir<>;        /* home directory */
        string pw_shell<>;      /* default shell */
};

struct yppasswd {
        string oldpass<>;       /* unencrypted old password */
        x_passwd newpw;         /* new passwd entry */
};
|
http://opensource.apple.com/source/Librpcsvc/Librpcsvc-11/yppasswd.x
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
JBoss ESB 4.2 Milestone Release 3
Trailblazer Guide
JBESB 3.
This guide is most relevant to engineers who are responsible for using JBoss ESB 4.2 Milestone Release 3 installations and want to know how to deploy and test the Trailblazer found under the Samples.
You will need the JBossESB distribution, source or binary to run the trailblazer. You will also need an instance of JBoss Application Server installed with JBossWS and EJB3 support. You can use the App. Server installer from JBoss and install using the EJB3 Profile.
To test the email notification of quotes, you will require a mail server or the information from your ISP/company email server.
This guide contains the following chapters:
Chapter 1, Overview: an overview of the loanbroker trailblazer scenario used in JBossESB.
Chapter 2, In Depth Look: a more detailed look at the various artifacts that make up the trailblazer.
Chapter 3, Deploying and Testing the TB: how to compile, deploy, and test the trailblazer.
The following conventions are used in this guide:
Table 1 Formatting Conventions
In addition to this guide, the following guides are available in the JBoss ESB 4.2 Milestone Release 3 documentation set:
JBoss ESB 4.2 Milestone Release 3 Administration Guide: How to manage the ESB.
Chapter 1
Trailblazer
The Trailblazer is meant to show a commonly understood use-case where the JBossESB can be used to solve the integration problem at hand. The TB is loosely based on the Enterprise Application Integration book. The scenario is very simple - a user is shopping around for a bank loan with the best terms, rate, etc. A loan broker will act as middle-man between the user and the banks. The LoanBroker will gather all the required information from the user, and then pass it on to each bank. As the quotes are received from the various banks, the LoanBroker will pass those back to the requesting user. This is a common practice in the financial services world today – it's a model used for insurance quotes, mortgage quotes, and so on.
A simple scenario as described above actually puts forth several integration challenges. Each bank has its own data feed structure (xml, delimited, positional, etc), its own communication protocol (file, jms, ftp, etc), and finally the responses from each of these are unique to each bank. A LoanBroker acting as the agent for these institutions must be able to accommodate each scenario, without expecting the bank to adjust anything. The banks provide a service, and have a clearly defined contract in which to carry out that service. It's our job as the LoanBroker developer to ensure we can be as flexible and adaptable as possible to handle a variety of possible communication protocols, data formats and so on.
This is where JBossESB comes in. Traditionally, an organization would create custom code and scripts to manage the end to end integration between the LoanBroker and each bank. (aka point-to-point interfaces). This is cumbersome, and messy when it comes to maintenance. Adding new banks, and new protocols is not easy. JBossESB gives us a central framework for developing a solution built around a common set of services which can be applied over and over to each unique bank requirement. Adding a new bank then becomes trivial, and support is a lot simpler when you only need to work on one common codebase.
The diagram below shows the scenario at a high level:
* the diagram above is not using any specific notation or style (some of you might be expecting the EIP symbols).
Chapter 2
In Depth Look
The client is a simple JSP page, which routes the submit to a waiting web service. The Loan Request consists of the typical information you would expect from such a request: A social security number (ssn), some personal information like name, address, and so on, as well as loan specific information – loan amount, etc.
The web service, which is responsible for receiving the loan requests, is a JSR-181 based annotated web service. An annotated web service lets you take any pojo and expose the methods as being capable of receiving requests. The class looks as follows:
package org.jboss.soa.esb.samples.trailblazer.web;

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

import org.apache.log4j.Logger;
import org.jboss.soa.esb.samples.trailblazer.loanbroker.LoanBroker;

/**
 * The Loan broker web service, which will handle a loan request.
 */
@WebService(name = "LoanBrokerWS",
        targetNamespace = "")
@SOAPBinding(style = SOAPBinding.Style.RPC)
public class LoanBrokerWS
{
    private static Logger logger = Logger.getLogger(LoanBrokerWS.class);

    @WebMethod
    // method name is .NET friendly
    public void RequestLoan(WebCustomer customer)
    {
        logger.info("WebCustomer received: \n" + customer);
        LoanBroker broker = new LoanBroker();
        broker.processLoanRequest(customer);
    }
}
JSR-181 annotated pojo web services are a very easy and powerful way to expose plain old java classes as web services. The JBossESB does not have built in support for web services yet, but since we are working in Java, there is no reason why you cannot combine your own web services with the JBossESB services, which is what was done in the trailblazer. The class above is the server-side web service. You still need to provide the client (the JSP in this case) with the client stubs to communicate with the web service. JBossIDE has a web service client-side generator that can create these classes if you are looking for a tool to use for this.
The most important piece in the web service, is the line which invokes the LoanBroker object, and passes a customer object for processing.
The Loan Broker is a standard java application, which makes use of the services available in the JBossESB to get data to and from the banks, and then finally back to the customer as an email response.
Let's look first at the ESB components required for processing a loan request.
In this release, the banks bundled include a JMS-based bank and a file-based bank. Each has its own unique data requirements and data formats. These are external entities. In a real production world scenario, these might be internal systems, accessible within your own network, or they may be external providers which you will need to communicate with through some protocol. Needless to say, for this example, we are not focusing on aspects like security, authentication, and other concerns which you would most certainly face. We are focusing solely on the JBossESB components and some sample configurations which you could use to create a similar scenario.
JBossESB has a concept of “Gateway” and “ESB Aware” services. ESB Aware services are able to communicate with the ESB directly using native APIs found in the ESB. These APIs for instance require that you use a Message object. Since the LoanBroker is java based, and has access to the JBossESB APIs, it will be an ESB Aware service. The banks on the other hand, are NON-ESB Aware services. They have no idea, nor should they know anything about the ESB. It is the job of the services in the ESB to facilitate communication to and from the banks, as well as data transformation to/from and so on. These services (the Banks) will interact with the JBossESB through what we call Gateway Services. To read more on the differences between the two, please see the Programmer's Guide.
Let's look at just how you configure the various services in JBossESB. Inside the <TRAILBLAZER_ROOT>/esb/conf/jbossesb.xml you will see the following deployed services:
<?xml version="1.0" encoding="UTF-8"?>
<jbossesb xmlns="" parameterReloadSecs="50">

    <providers>
        <jms-provider>
            <jms-bus>
                <jms-message-filter/>
            </jms-bus>
            <jms-bus>
                <jms-message-filter/>
            </jms-bus>
            <jms-bus>
                <jms-message-filter/>
            </jms-bus>
        </jms-provider>
    </providers>

    <services>
        <service category="trailblazer" name="creditagency"
                 description="Credit Agency Service">
            <listeners>
                <jms-listener/>
            </listeners>
            <actions>
                <action class="org.jboss.soa.esb.samples.trailblazer.actions.CreditAgencyActions"
                        process="processCreditRequest" name="fido">
                </action>
            </actions>
        </service>
        <!-- further services (the JMS and file bank gateways) omitted here -->
    </services>

</jbossesb>
The config above uses a configuration structure which is described in much more detail in Chapter 5 of the JBossESB Programmer's Guide. The config for the TB describes several communication providers, listed in the <providers> section, all consisting of JMS in this example, and using JBossMQ as the actual JMS transport. Next, several <services> are listed, starting with the creditagency, and the various JMS bank services for sending and receiving data from the banks. The banks have their own config files, which must be configured to use and reply on the queues described above. Please see <TRAILBLAZER_ROOT>/banks/bank.properties.
The LoanBroker makes use of the services described above, in the following lines of code:
public void processLoanRequest(WebCustomer wCustomer) {
    Customer customer = getCustomer(wCustomer);

    // keep the customer in a file someplace for later use, if needed
    CustomerMasterFile.addCustomer(String.valueOf(customer.ssn), customer);

    // step 1 - send to credit agency for credit score if available
    //          uses a 2-way courier for a response
    sendToCreditAgency(customer);

    // step 2 - send to JMS Bank
    sendToJMSBank(customer);
}
The sendToCreditAgency is where an interaction with the ESB takes place. Please see the code for more detailed listing. The sections below illustrate the important parts:
courier.setReplyToEpr(replyEPR);
//wait for 5 seconds then give up
replyMessage = courier.pickup(5000);
We set the courier's ReplyToEpr with an EPR we create, then we tell the courier to pick up the response for us, waiting a maximum of 5 seconds. For more detailed information on how Couriers and 2WayCouriers work, please see the Programmer's Guide.
The interaction with the Banks uses a simpler, asynchronous API – there is no waiting for a reply from the banks. The bank replies come in on their own queue, and the GatewayService defined for that purpose fires it off to an action class to handle the response. See the listing from the jbossesb.xml:
>
The important element above is that org.jboss.soa.esb.samples.trailblazer.actions.BankResponseActions is the class defined as being responsible for handling the bank JMS responses. The property process="processResponseFromJMSBank" tells the service which method in this class will actually do the work. Below is a code snippet from this method:
public Message processResponseFromJMSBank(Message message) throws Exception {
    _message = message;
    _logger.debug("message received: \n" + new String(message.getBody().getContents()));

    // get the response from the bank and set it in the customer
    ConfigTree tree = ConfigTree.fromXml(new String(message.getBody().getContents()));
    String quoteID = tree.getFirstTextChild("quoteId");
    String rate = tree.getFirstTextChild("interestRate");
    String errorCode = tree.getFirstTextChild("errorCode");
    String ssn = tree.getFirstTextChild("customerUID");
    String email = tree.getFirstTextChild("customerEmail");

    ProcessEmail procEmail = new ProcessEmail(email, quoteID, rate, errorCode, ssn);
    procEmail.sendEmail();
    return message;
}
The code above retrieves the contents of the payload from the Message.getBody().getContents(). Those contents are then used to populate some strings, which are eventually used to fill in the email which goes back to the customer.
The sequence diagram below illustrates the full set of calls that are made in the trailblazer.
|
http://docs.jboss.org/jbossesb/docs/4.2MR3/manuals/html/TBGuide.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
EMC, customers stung by hard drive shortages from Thailand floods
Headlines: EMC informs its partners and customers that it will increase list prices of its hard drives starting Jan. 1 as a result of the shortages caused by Thailand’s floods.
Storage channel news roundup for Dec. 15 to Dec. 21, 2011.
Read the full blog post on the impact of Thailand’s floods on EMC and its customers.
Seagate-Samsung close hard drive deal seven months later
One of two pending multibillion-dollar hard drive vendor acquisitions closed this week when Seagate Technology wrapped up its $1.4 billion transaction with Samsung Electronics.
Seagate is acquiring Samsung’s M8 product line of 2.5-inch high-capacity hard drives and will supply disk drives to Samsung for PCs, notebooks and consumer devices. Samsung will supply Seagate with chips for enterprise solid-state drives (SSDs). Seagate already uses those chips for its SSD and hybrid drives. Seagate is also acquiring employees from Samsung’s Korea design center.
Check out this tip on enterprise hard drives and newer drive technologies.
ASG Software expands into backup, archiving with Atempo
Backup and archiving software vendor Atempo gave up the fight to survive as an independent niche player among giants last week, after ASG Software Solutions purchased the company as an addition to ASG’s portfolio of IT management software.
ASG did not disclose the price it paid for Atempo, but ASG founder and CEO Arthur Allen said his company will continue to offer Atempo’s backup, recovery and archiving products software as standalone products while also integrating them into ASG’s enterprise automation management (EAMS) software suite.
Find out about developing online backup and archiving services in this story.
HDS sharpens file capabilities for the cloud
Hitachi Data Systems added a few more lanes to its cloud storage on-ramp last week.
HDS brought out the Hitachi Data Ingestor (HDI) caching appliance a year ago, calling it an “on-ramp to the cloud” for use with its Hitachi Content Platform (HCP) object storage system. Last week it added content sharing, file restore and NAS migration capabilities to the appliance.
Content sharing lets customers in remote offices share data across a network of HDI systems, as all of the systems can read from a single HCP namespace. File restore lets users retrieve previous versions of files and deleted files, and the NAS migration lets customers move data from NetApp NAS filers and Windows servers to HDI.
Get a peer’s advice on cloud storage services in this podcast.
Top 10 storage trends in 2011: Solid-state technology, cloud, ‘big data’ make a mark
Solid-state technology dominated the data storage headlines in 2011, even if the technology has yet to become widely adopted. Cloud storage, as well as storage and backup for virtual machines (VMs), also played a prominent role in the news this year.
See the full news stories associated with the top storage trends of 2011.
|
http://searchitchannel.techtarget.com/news/2240112934/EMC-customers-stung-by-hard-drive-shortages-from-Thailand-floods
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
The pipes-network-tls package
Use TLS-secured network connections together with the pipes ecosystem.
This package is organized using the following namespaces:
Pipes.Network.TCP.TLS exports pipes and utilities for using TLS-secured TCP connections in a streaming fashion.
Pipes.Network.TCP.TLS.Safe subsumes Pipes.Network.TCP.TLS, exporting pipes and functions that allow you to safely establish new TCP connections within a pipeline using the pipes-safe facilities. You only need to use this module if you want to acquire and release operating system resources within a pipeline.
See the changelog file in the source distribution to learn about any important changes between versions.
Properties
Modules
- Pipes
Downloads
- pipes-network-tls-0.2.1.tar.gz [browse] (Cabal source package)
- Package description (included in the package)
|
http://hackage.haskell.org/package/pipes-network-tls-0.2.1
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
* Right if you "remove" then the DTD puts them back ;)
Pesky DTDs.... ;o)
Thanks for that Thomas, it's much clearer now.
I didn't appreciate that the spec defines that those attributes should
be included. I had just got as far as experimenting and finding that an
SVG file without those attributes included 'worked', but is obviously
not compliant SVG.
Thanks again guys for the help, cheers,
Dylan
________________________________
From: thomas.deweese@kodak.com [mailto:thomas.deweese@kodak.com]
Sent: 11 April 2008 12:11
To: batik-users@xmlgraphics.apache.org
Cc: batik-users@xmlgraphics.apache.org
Subject: Re: Attrbutes Generated For USE tag
Hi Dylan,
"Dylan Browne" <dbrowne@mango-solutions.com> wrote on 04/07/2008
05:31:00 AM:
> I have a question about the attributes that are added into my "use"
> tag when I generate it. Basically I am attempting to 'streamline' my
> SVG to reduce file size, so any attributes that are not required I'd
> like to strip out.
So the attributes you are concerned about are added because the DTD
says they should be added. You can detect them on output by checking
'getSpecified' on the Attribute Node (it will return false for these).
> I was wondering if there was a way I could remove the inclusion of
> the xlink namespace etc, as these are defined in the root of my SVG
document?
Not really the SVG specification says they must be present in
the DOM.
> I've tried doing a removeAttributeNS but either this is not possible
> or I am attempting to reference them by the wrong namespace. I've
> tried variations on something like this:
Right if you "remove" then the DTD puts them back ;)
We should probably filter these attributes on output in
our various writing utilities, however I've been reluctant to
do that since they shouldn't hurt anything downstream
(although they do increase file size).
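As a small illustration of the getSpecified tip above (a sketch, not code from this thread), a custom serializer could skip the DTD-defaulted attributes like this:

import org.w3c.dom.Attr;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;

final class SpecifiedAttributeWriter {
    // Append only the attributes that were explicitly set in the source document;
    // attributes filled in from the DTD report getSpecified() == false.
    static void writeSpecifiedAttributes(Element element, StringBuilder out) {
        NamedNodeMap attrs = element.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            Attr attr = (Attr) attrs.item(i);
            if (attr.getSpecified()) {
                out.append(' ').append(attr.getName())
                   .append("=\"").append(attr.getValue()).append('"');
            }
        }
    }
}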
|
http://mail-archives.apache.org/mod_mbox/xmlgraphics-batik-users/200804.mbox/%3C3CBFCFB1FEFFA841BA83ADF2F2A9C6FA191A1D@mango-data1.Mango.local%3E
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Preface
This article is part 2 of the series "Publish a modern JavaScript (or TypeScript) library". Check out the motivation and links to other parts in the introduction.
Why Babel and how should you use it in a library?
If you are not interested in the background and reasoning behind the setup, jump directly to the conclusion
Babel can transpile JavaScript as well as TypeScript. I would argue that it's even better to use Babel instead of the TypeScript compiler for compiling the code (down) to compatible JavaScript because it is faster. What Babel does when it compiles TypeScript is it just discards everything that isn't JavaScript. Babel does no type checking. Which we don't need at this point.
To use Babel you have to install it first: Run
npm install -D @babel/core @babel/cli @babel/preset-env. This will install the core files, the preset you are going to need always and the command line interface so that you can run Babel in your terminal. Additionally, you should install
@babel/preset-typescript and/or
@babel/preset-react, both according to your needs. I will explain in a bit what each of them is used for, but you can imagine from their names in which situations you need them.
So, setup time! Babel is configured via a configuration file. (For details and special cases see the documentation.) The project-wide configuration file should be
babel.config.js. It looks at least very similar to this one:
module.exports = {
  presets: [
    [
      '@babel/env',
      {
        modules: false,
      }
    ],
    '@babel/preset-typescript',
    '@babel/preset-react'
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime',
      {
        corejs: 3
      }
    ]
  ],
  env: {
    test: {
      presets: ['@babel/env']
    }
  }
};
Let's go through it because there are a few assumptions used in this config which we will need for other features in our list.
module.exports = {…}
The file is treated as a CommonJS module and is expected to return a configuration object. It is possible to export a function instead but we'll stick to the static object here. For the function version look into the docs.
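If you ever do need the function form, it looks roughly like this (a sketch based on the Babel docs, not something this setup requires):

// babel.config.js, function form
module.exports = function (api) {
  api.cache(true); // cache the computed config between builds
  return {
    presets: [['@babel/env', { modules: false }]],
  };
};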
presets
Presets are (sometimes configurable) sets of Babel plugins so that you don't have to manage yourself which plugins you need. The one you should definitely use is
@babel/preset-env. You have already installed it. Under the
presets key in the config you list every preset your library is going to use along with any preset configuration options.
In the example config above there are three presets:
env is the mentioned standard one.
typescript is obviously only needed to compile files that contain TypeScript syntax. As already mentioned it works by throwing away anything that isn't JavaScript. It does not interpret or even check TypeScript. And that's a Good Thing. We will talk about that point later. If your library is not written in TypeScript, you don't need this preset. But if you need it, you have to install it of course:
npm install -D @babel/preset-typescript.
react is clearly only needed in React projects. It brings plugins for JSX syntax and transforming. If you need it, install it with:
npm i -D @babel/preset-react. Note: With the config option
pragma (and probably
pragmaFrag) you can transpile JSX to other functions than
React.createElement. See documentation.
Let us look at the
env preset again. Notable is the
modules: false option for
preset-env. The effect is this: As per default Babel transpiles ESModules (
import /
export) to CommonJS modules (
require() /
module.export(s)). With
modules set to
false Babel will output the transpiled files with their ESModule syntax untouched. The rest of the code will be transformed, just the module related statements stay the same. This has (at least) two benefits:
First, this is a library. If you publish it as separate files, users of your library can import exactly the modules they need. And if they use a bundler that has the ability to treeshake (that is: to remove unused modules on bundling), they will end up with only the code bits they need from your library. With CommonJS modules that would not be possible and they would have your whole library in their bundle.
Furthermore, if you are going to provide your library as a bundle (for example a UMD bundle that one can use via unpkg.com), you can make use of treeshaking and shrink your bundle as much as possible.
There is another, suspiciously absent option for
preset-env and that is the
targets option. If you omit it, Babel will transpile your code down to ES5. That is most likely not what you want—unless you live in the dark, medieval times of JavaScript (or you know someone who uses IE). Why transpiling something (and generating much more code) if the runtime environment can handle your modern code? What you could do is to provide said
targets key and give it a Browserslist compatible query (see Babel documentation). For example something like
"last 2 versions" or even
"defaults". In that case Babel would use the browserslist tool to find out which features it has to transpile to be able to run in the environments given with
targets.
But we will use another place to put this configuration than the
babel.config.js file. You see, Babel is not the only tool that can make use of browserslist. But any tool, including Babel, will find the configuration if it's in the right place. The documentation of browserslist recommends to put it inside
package.json so we will do that. Add something like the following to your library's
package.json:
"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS version",
  "last 2 safari version"
]
I will admit this query is a bit opinionated, maybe not even good for you. You can of course roll your own, or if you are unsure, just go with this one:
"browserslist": "defaults" // alias for "> 0.5%, last 2 versions, Firefox ESR, not dead"; contains ie 11
The reason I propose the query array above is that I want to get an optimized build for modern browsers.
"defaults",
"last 2 versions" (without specific browser names) and the like will include things like Internet Explorer 11 and Samsung Internet 4. These ancient browsers do not support so much even of ES2015. We would end up with a much much bigger deliverable than modern browsers would need. But there is something you can do about it. You can deliver modern code to modern browsers and still support The Ancients™. We will go into further details in a future section but as a little cliffhanger: browserslist supports multiple configurations. For now we will target only modern browsers.
plugins
The Babel configuration above defines one extra plugin:
plugin-transform-runtime. The main reason to use this is deduplication of helper code. When Babel transpiles your modules, it injects little (or not so little) helper functions. The problem is that it does so in every file where they are needed. The
transform-runtime plugin replaces all those injected functions with
require statements to the
@babel/runtime package. That means in the final application there has to be this runtime package.
To make that happen you could just add
@babel/runtime to the prod dependencies of your library (
npm i @babel/runtime). That would definitely work. But here we will add it to the
peerDependencies in
package.json. That way the user of your library has to install it themselves but on the other hand, they have more control over the version (and you don't have to update the dependency too often). And maybe they have it installed already anyway. So we just push it out of our way and just make sure that it is there when needed.
Back to the Babel plugin. To use that plugin you have to install it:
npm i -D @babel/plugin-transform-runtime. Now you're good to go.
Before we go on to the
env key, this is the right place to talk about polyfills and how to use them with Babel.
How to use polyfills in the best way possible
It took me a few hours reading and understanding the problem, the current solutions and their weaknesses. If you want to read it up yourself, start at Babel polyfill, go on with Babel transform-runtime and then read core-js@3, babel and a look into the future.
But because I already did you don't have to if you don't want to. Ok, let's start with the fact that there two standard ways to get polyfills into your code. Wait, one step back: Why polyfills?
If you already know, skip to Import core-js. When Babel transpiles your code according to the target environment that you specified, it just changes syntax. Code that the target (the browser) does not understand is changed to (probably longer and more complicated) code that does the same and is understood. But there are things beyond syntax that are possibly not supported: features. Like for example Promises. Or certain features of other builtin types like
Object.is or
Array.from or whole new types like
Map or
Set. Therefore we need polyfills that recreate those features for targets that do not support them natively.
Also note that we are talking here only about polyfills for ES-features or some closely related Web Platform features (see the full list here). There are browser features like for instance the global
fetch function that need separate polyfills.
Import core-js
Ok, so there is a Babel package called
@babel/polyfill that you can import at the entry point of your application and it adds all needed polyfills from a library called
core-js as well as a separate runtime needed for
async/await and generator functions. But since Babel 7.4.0 this wrapper package is deprecated. Instead you should install and import two separate packages:
core-js/stable and
regenerator-runtime/runtime.
Then, we can get a nice effect from our
env preset from above. We change the configuration to this:
[
  '@babel/env',
  {
    modules: false,
    corejs: 3,
    useBuiltIns: 'usage'
  }
],
This will transform our code so that the import of the whole
core-js gets removed and instead Babel injects specific polyfills in each file where they are needed. And only those polyfills that are needed in the target environment which we have defined via
browserslist. So we end up with the bare minimum of additional code.
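To make that concrete, here is roughly what happens to a single file (illustrative only; the exact core-js module paths Babel picks depend on your targets and core-js version):

// src/example.js, before compilation
export const unique = Array.from(new Set([1, 1, 2]));

// roughly what Babel writes to the compiled file for an older target
import "core-js/modules/es.array.from";
import "core-js/modules/es.set";
export const unique = Array.from(new Set([1, 1, 2]));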
Two additional notes here: (1) You have to explicitly set
corejs to
3. If the key is absent, Babel will use version 2 of
corejs and you don't want that. Much has changed for the better in version 3, especially feature-wise. But also bugs have been fixed and the package size is dramatically smaller. If you want, read it all up here (overview) and here (changelog for version 3.0.0).
And (2), there is another possible value for
useBuiltIns and that is
entry. This variant will not figure out which features your code actually needs. Instead, it will just add all polyfills that exist for the given target environment. It works by looking for
core-js imports in your source (like
import 'core-js/stable') which should only appear once in your codebase, probably in your entry module. Then, it replaces this "meta" import with all of the specific imports of polyfills that match your targets. This approach will likely result in a much, much larger package with a lot of unneeded code. So we just use
usage. (With
corejs@2 there were a few problems with
usage that could lead to wrong assumptions about which polyfills you need. So in some cases
entry was the safer option. But these problems are apparently fixed with version 3.)
Tell transform-runtime to import core-js
The second way to get the polyfills that your code needs is via the
transform-runtime plugin from above. You can configure it to inject not only imports for the Babel helpers but also for the
core-js modules that your code needs:
plugins: [
  [
    '@babel/plugin-transform-runtime',
    { corejs: 3 }
  ]
],
This tells the plugin to insert import statements to corejs version 3. The reason for this version I have mentioned above.
If you configure the plugin to use
core-js, you have to change the runtime dependency: The
peerDependencies should now contain not
@babel/runtime but
@babel/runtime-corejs3!
Which way should you use?
In general, the combination of manual import and the
env preset is meant for applications and the way with
transform-runtime is meant for libraries. One reason for this is that the first way of using
core-js imports polyfills that "pollute" the global namespace. And if your library defines a global
Promise, it could interfere with other helper libraries used by your library's users. The imports that are injected by
transform-runtime are contained. They import from
core-js-pure which does not set globals.
On the other hand, using the transform plugin does not account for the environment you are targeting. Probably in the future it could also use the same heuristics as
preset-env but at the moment it just adds every polyfill that is theoretically needed by your code. Even if the target browsers would not need them or not all of them. For the development in that direction see the comment from the corejs maintainer and this RFC issue at Babel.
So it looks like you have to choose between a package that adds as few code as possible and one that plays nicely with unknown applications around it. I have played around with the different options a bit and bundled the resulting files with webpack and this is my result:
You get the smallest bundle with the
core-js globals from
preset-env. But it's too dangerous for a library to mess with the global namespace of its users. Besides that, in the (hopefully very near) future the transform-runtime plugin will also use the browserslist target environments. So the size issue is going to go away.
The
env key
With
env you can add configuration options for specific build environments. When Babel executes it will look for
process.env.BABEL_ENV. If that's not set, it will look up
process.env.NODE_ENV and if that's not found, it will fallback to the string
'development'. After doing this lookup it will check if the config has an
env object and if there is a key in that object that matches the previously found env string. If there is such a match, Babel applies the configuration under that env name.
We use it for example for our test runner Jest. Because Jest can not use ESModules we need a Babel config that transpiles our modules to CommonJS modules. So we just add an alternative configuration for
preset-env under the env name
'test'. When Jest runs (We will use
babel-jest for this. See in a later part of this series.) it sets
process.env.NODE_ENV to
'test'. And so everything will work.
Conclusion and final notes for Babel setup
Install all needed packages:
npm i -D @babel/core @babel/cli @babel/preset-env @babel/plugin-transform-runtime
Add a peerDependency to your
package.json that your users should install themselves:
...
"peerDependencies": {
  "@babel/runtime-corejs3": "^7.4.5", // at least version 7.4; your users have to provide it
}
...
Create a
babel.config.js that contains at least this:
// babel.config.js
module.exports = {
  presets: [
    [
      '@babel/env', // transpile for targets
      {
        modules: false, // don't transpile module syntax
      }
    ],
  ],
  plugins: [
    [
      '@babel/plugin-transform-runtime', // replace helper code with runtime imports (deduplication)
      { corejs: 3 } // import corejs polyfills exactly where they are needed
    ]
  ],
  env: {
    test: { // extra configuration for process.env.NODE_ENV === 'test'
      presets: ['@babel/env'] // overwrite env-config from above with transpiled module syntax
    }
  }
};
If you write TypeScript, run
npm i -D @babel/preset-typescript and add
'@babel/preset-typescript' to the
presets.
If you write React code, (JSX) run
npm i -D @babel/preset-react and add
'@babel/preset-react' to the
presets.
Add a
browserslist section in your package.json:
...
"browserslist": [
  "last 2 Chrome versions",
  "last 2 Firefox versions",
  "last 2 Edge versions",
  "last 2 Opera versions",
  "last 2 FirefoxAndroid versions",
  "last 2 iOS version",
  "last 2 safari version"
]
...
In case of using another browserslist query that includes targets that do not have support for generator functions and/or async/await, there is something you have to tell your users:
Babel's transform-runtime plugin will import
regenerator-runtime. This library depends on a globally available Promise constructor. But Babel will not include a promise polyfill for regenerator-runtime. Probably because it adds polyfills only for things genuinely belonging to your code, not external library code. That means, if your usecase meets these conditions, you should mention it in your README or installation instructions that the users of your lib have to make sure there is a Promise available in their application.
And that is it for the Babel setup.
Next up: Compiling with the TypeScript compiler
Many thanks to my friend Tim Kraut for proof-reading this article!
Discussion (1)
Thanks for the series! Looking forward to the next post!
|
https://dev.to/4nduril/transpile-modern-language-features-with-babel-4fcp
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
@Generated(value="OracleSDKGenerator", comments="API Version: 20200601") public final class LogAnalyticsObjectCollectionRule extends Object
The configuration details of an Object Storage based collection rule.
LogAnalyticsObjectCollectionRule.Builder. This model distinguishes fields that are
nullbecause they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
LogAnalyticsObjectCollectionRule.Builder.

@ConstructorProperties({"id","name","description","compartmentId","osNamespace","osBucketName","collectionType","pollSince","pollTill","logGroupId","logSourceName","entityId","charEncoding","overrides","lifecycleState","lifecycleDetails","timeCreated","timeUpdated","isEnabled","objectNameFilters","definedTags","freeformTags"})
@Deprecated
public LogAnalyticsObjectCollectionRule(String id, String name, String description, String compartmentId, String osNamespace, String osBucketName, ObjectCollectionRuleCollectionTypes collectionType, String pollSince, String pollTill, String logGroupId, String logSourceName, String entityId, String charEncoding, Map<String,List<PropertyOverride>> overrides, ObjectCollectionRuleLifecycleStates lifecycleState, String lifecycleDetails, Date timeCreated, Date timeUpdated, Boolean isEnabled, List<String> objectNameFilters, Map<String,Map<String,Object>> definedTags, Map<String,String> freeformTags)
public static LogAnalyticsObjectCollectionRule.Builder builder()
Create a new builder.
public LogAnalyticsObjectCollectionRule.Builder toBuilder()
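As a rough, hypothetical sketch (not taken from the reference itself), creating an instance with the generated builder and copying it via toBuilder() might look like this; all OCIDs and names below are placeholders, and the setter names simply mirror the field names documented on this page:

import com.oracle.bmc.loganalytics.model.LogAnalyticsObjectCollectionRule;
import com.oracle.bmc.loganalytics.model.ObjectCollectionRuleCollectionTypes;

public class CollectionRuleSketch {
    public static void main(String[] args) {
        // Placeholder values; real OCIDs, namespaces and bucket names come from your tenancy.
        LogAnalyticsObjectCollectionRule rule = LogAnalyticsObjectCollectionRule.builder()
                .name("my-collection-rule")
                .compartmentId("ocid1.compartment.oc1..example")
                .osNamespace("mytenancynamespace")
                .osBucketName("log-bucket")
                .collectionType(ObjectCollectionRuleCollectionTypes.Live)
                .logGroupId("ocid1.loganalyticsloggroup.oc1..example")
                .logSourceName("LinuxSyslogSource")
                .isEnabled(true)
                .build();

        // toBuilder() copies the current state, so single fields can be changed afterwards.
        LogAnalyticsObjectCollectionRule disabledCopy = rule.toBuilder()
                .isEnabled(false)
                .build();
        System.out.println(disabledCopy.getName());
    }
}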
public String getId()
public String getName()
A unique name to the rule. The name must be unique, within the tenancy, and cannot be changed.
public String getDescription()
A string that describes the details of the rule. It does not have to be unique, and can be changed. Avoid entering confidential information.
public String getCompartmentId()
public String getOsNamespace()
Object Storage namespace.
public String getOsBucketName()
Name of the Object Storage bucket.
public ObjectCollectionRuleCollectionTypes getCollectionType()
The type of log collection.
public String getPollSince()
The oldest time of the file in the bucket to consider for collection. Accepted values are: BEGINNING or CURRENT_TIME or RFC3339 formatted datetime string. Use this for HISTORIC or HISTORIC_LIVE collection types. When collectionType is LIVE, specifying pollSince value other than CURRENT_TIME will result in error.
public String getPollTill()
The newest time of the file in the bucket to consider for collection. Accepted values are: CURRENT_TIME or RFC3339 formatted datetime string. Use this for HISTORIC collection type. When collectionType is LIVE or HISTORIC_LIVE, specifying pollTill will result in error.
public String getLogGroupId()
Logging Analytics Log group OCID to associate the processed logs with.
public String getLogSourceName()
Name of the Logging Analytics Source to use for the processing.
public String getEntityId()
Logging Analytics entity OCID to associate the processed logs with.

public ObjectCollectionRuleLifecycleStates getLifecycleState()
The current state of the rule.
public String getLifecycleDetails()
A detailed status of the life cycle state.
public Date getTimeCreated()
The time when this rule was created. An RFC3339 formatted datetime string.
public Date getTimeUpdated()
The time when this rule was last updated. An RFC3339 formatted datetime string.
public Boolean getIsEnabled()
Whether or not this rule is currently enabled.
public List<String> getObjectNameFilters()
When the filters are provided, only the objects matching the filters are picked up for processing. The matchType supported is exact match and accommodates wildcard “*”. For more information on filters, see Event Filters.
https://docs.oracle.com/en-us/iaas/tools/java/2.14.0/com/oracle/bmc/loganalytics/model/LogAnalyticsObjectCollectionRule.html
WSL+Docker: Kubernetes on the Windows Desktop

Kubernetes is a platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in the cloud environments (public, private or hybrid), or within the bare metal environments, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation.
Kubernetes has been originally designed to be deployed and used in the Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - the Windows Subsystem for Linux, the line between Windows and Linux environments became even less visible.
Also, WSL brought the ability to run Kubernetes on Windows almost seamlessly!
Below, we will cover in brief how to install and use various solutions to run Kubernetes locally.
Prerequisites
Since we will explain how to install KinD, we won't go into too much detail around the installation of KinD's dependencies.
However, here is the list of the prerequisites needed and their version/lane:
- OS: Windows 10 version 2004, Build 19041
- WSL2 enabled
In order to install the distros as WSL2 by default, once WSL2 is installed, run the command
wsl.exe --set-default-version 2 in PowerShell
- WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04
- Docker Desktop for Windows, stable channel - the version used is 2.2.0.4
- [Optional] Microsoft Terminal installed from the Windows Store
- Open the Windows store and type "Terminal" in the search, it will be (normally) the first option
And that's actually it. For Docker Desktop for Windows, no need to configure anything yet as we will explain it in the next section.
WSL2: First contact
Once everything is installed, we can launch the WSL2 terminal from the Start menu, and type "Ubuntu" for searching the applications and documents:
Once found, click on the name and it will launch the default Windows console with the Ubuntu bash shell running.
Like for any normal Linux distro, you need to create a user and set a password:
[Optional] Update the
sudoers
As we are working, normally, on our local computer, it might be nice to update the
sudoers and set the group
%sudo to be password-less:
# Edit the sudoers with the visudo command
sudo visudo

# Change the %sudo group to be password-less
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL

# Press CTRL+X to exit
# Press Y to save
# Press Enter to confirm
Update Ubuntu
Before we move to the Docker Desktop settings, let's update our system and ensure we start in the best conditions:
# Update the repositories and list of the packages available
sudo apt update

# Update the system based on the packages installed > the "-y" will approve the change automatically
sudo apt upgrade -y
Docker Desktop: faster with WSL2
Before we move into the settings, let's do a small test, it will display really how cool the new integration with Docker Desktop is:
# Try to see if the docker cli and daemon are installed
docker version

# Same for kubectl
kubectl version
You got an error? Perfect! It's actually good news, so let's now move on to the settings.
Docker Desktop settings: enable WSL2 integration
First let's start Docker Desktop for Windows if it's not still the case. Open the Windows start menu and type "docker", click on the name to start the application:
You should now see the Docker icon with the other taskbar icons near the clock:
Now click on the Docker icon and choose settings. A new window will appear:
By default, the WSL2 integration is not active, so click the "Enable the experimental WSL 2 based engine" and click "Apply & Restart":
What this feature did behind the scenes was to create two new distros in WSL2, containing and running all the needed backend sockets, daemons and also the CLI tools (read: docker and kubectl command).
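If you are curious, you can check this from PowerShell yourself; the exact distro names below are what Docker Desktop created at the time of writing and may differ slightly in newer versions:

# List the registered WSL distros and their WSL version
wsl.exe --list --verbose

# The output should include entries similar to:
#   NAME                   STATE           VERSION
# * Ubuntu-18.04           Running         2
#   docker-desktop         Running         2
#   docker-desktop-data    Running         2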
However, this first setting alone is not enough to run the commands inside our distro. If we try, we will get the same error as before.
In order to fix it, and finally be able to use the commands, we need to tell the Docker Desktop to "attach" itself to our distro also:
Let's now switch back to our WSL2 terminal and see if we can (finally) launch the commands:
# Try to see if the docker cli and daemon are installed
docker version

# Same for kubectl
kubectl version
Tip: if nothing happens, restart Docker Desktop and restart the WSL process in PowerShell:
Restart-Service LxssManager and launch a new Ubuntu session
And success! The basic settings are now done and we move to the installation of KinD.
KinD: Kubernetes made easy in a container

Let's now install KinD and create our first cluster.
And as sources are always important to mention, we will follow (partially) the how-to on the official KinD website:
# Download the latest version of KinD
curl -Lo ./kind
# Make the binary executable
chmod +x ./kind
# Move the binary to your executable path
sudo mv ./kind /usr/local/bin/
KinD: the first cluster
We are ready to create our first cluster:
# Check if the KUBECONFIG is not set
echo $KUBECONFIG

# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube

# Create the cluster and give it a name (optional)
kind create cluster --name wslkind

# Check if the .kube has been created and populated with files
ls $HOME/.kube
Tip: as you can see, the Terminal was changed so the nice icons are all displayed
The cluster has been successfully created, and because we are using Docker Desktop, the network is all set for us to use "as is".
So we can open the
Kubernetes master URL in our Windows browser:
And this is the real strength from Docker Desktop for Windows with the WSL2 backend. Docker really did an amazing integration.
KinD: counting 1 - 2 - 3
Our first cluster was created and it's the "normal" one node cluster:
# Check how many nodes it created
kubectl get nodes

# Check the services for the whole cluster
kubectl get all --all-namespaces
While this will be enough for most people, let's leverage one of the coolest features: multi-node clustering.
# Delete the existing cluster
kind delete cluster --name wslkind

# Create a config file for a 3 nodes cluster
cat << EOF > kind-3nodes.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

# Create a new cluster with the config file
kind create cluster --name wslkindmultinodes --config ./kind-3nodes.yaml

# Check how many nodes it created
kubectl get nodes
Tip: depending on how fast we run the "get nodes" command, it can be that not all the nodes are ready yet; wait a few seconds and run it again, and everything should be ready
And that's it, we have created a three-node cluster, and if we look at the services one more time, we will see several that have now three replicas:
# Check the services for the whole cluster
kubectl get all --all-namespaces
KinD: can I see a nice dashboard?
Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.
For that, the Kubernetes Dashboard project has been created. The installation and first connection test is quite fast, so let's do it:
# Install the Dashboard application into our cluster
kubectl apply -f

# Check the resources it created based on the new namespace created
kubectl get all -n kubernetes-dashboard
As it created a service with a ClusterIP (read: internal network address), we cannot reach it if we type the URL in our Windows browser:
That's because we need to create a temporary proxy:
# Start a kubectl proxy
kubectl proxy

# Enter the URL on your browser:
Finally to login, we can either enter a Token, which we didn't create, or enter the
kubeconfig file from our Cluster.
If we try to login with the
kubeconfig, we will get the error "Internal error (500): Not enough data to create auth info structure". This is due to the lack of credentials in the
kubeconfig file.
So to avoid ending up with the same error, let's follow the recommended RBAC approach.
Let's open a new WSL2 session:
# Create a new ServiceAccount
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Create a ClusterRoleBinding for the ServiceAccount
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Get the Token for the ServiceAccount
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

# Copy the token and paste it into the Dashboard login, then press "Sign in"
Success! And let's see our nodes listed also:
A nice and shiny three nodes appear.
Minikube: Kubernetes from everywhere

Let's now install Minikube and create our first cluster.
And as sources are always important to mention, we will follow (partially) the how-to from the Kubernetes.io website:
# Download the latest version of Minikube
curl -Lo minikube
# Make the binary executable
chmod +x ./minikube
# Move the binary to your executable path
sudo mv ./minikube /usr/local/bin/
Minikube: updating the host
If we follow the how-to, it states that we should use the
--driver=none flag in order to run Minikube directly on the host and Docker.
Unfortunately, we will get an error about "conntrack" being required to run Kubernetes v1.18:
# Create a minikube one node cluster
minikube start --driver=none
Tip: as you can see, the Terminal was changed so the nice icons are all displayed
So let's fix the issue by installing the missing package:
# Install the conntrack package
sudo apt install -y conntrack
Let's try to launch it again:
# Create a minikube one node cluster
minikube start --driver=none

# We got a permissions error > try again with sudo
sudo minikube start --driver=none
OK, this error could be problematic ... in the past. Luckily for us, there's a solution
Minikube: enabling SystemD
In order to enable SystemD on WSL2, we will apply the scripts from Daniel Llewellyn.
I invite you to read the full blog post and how he came to the solution, and the various iterations he did to fix several issues.
So in a nutshell, here are the commands:
# Install the needed packages
sudo apt install -yqq daemonize dbus-user-session fontconfig
# Create the start-systemd-namespace script
sudo vi /usr/sbin/start-systemd-namespace

#!/bin/bash
SYSTEMD_PID=$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')
if [ -z "$SYSTEMD_PID" ] || [ "$SYSTEMD_PID" != "1" ]; then
    export> "$HOME/.systemd-env"
    exec sudo /usr/sbin/enter-systemd-namespace "$BASH_EXECUTION_STRING"
fi
if [ -n "$PRE_NAMESPACE_PATH" ]; then
    export PATH="$PRE_NAMESPACE_PATH"
fi
# Create the enter-systemd-namespace script
sudo vi /usr/sbin/enter-systemd-namespace

#!/bin/bash
if [ "$UID" != 0 ]; then
    echo "You need to run $0 through sudo"
    exit 1
fi
SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
if [ -z "$SYSTEMD_PID" ]; then
    /usr/sbin/daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
    while [ -z "$SYSTEMD_PID" ]; do
        SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
    done
fi
if [ -n "$SYSTEMD_PID" ] && [ "$SYSTEMD_PID" != "1" ]; then
    if [ -n "$1" ] && [ "$1" != "bash --login" ] && [ "$1" != "/bin/bash --login" ]; then
        exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /usr/bin/sudo -H -u "$SUDO_USER" \
            /bin/bash -c 'set -a; source "$HOME/.systemd-env"; set +a; exec bash -c '"$(printf "%q" "$@")"
    else
        exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /bin/login -p -f "$SUDO_USER" \
            $(/bin/cat "$HOME/.systemd-env" | grep -v "^PATH=")
    fi
    echo "Existential crisis"
fi
# Edit the permissions of the enter-systemd-namespace script
sudo chmod +x /usr/sbin/enter-systemd-namespace

# Edit the bash.bashrc file
sudo sed -i 2a"# Start or enter a PID namespace in WSL2\nsource /usr/sbin/start-systemd-namespace\n" /etc/bash.bashrc
Finally, exit and launch a new session. You do not need to stop WSL2, a new session is enough:
Minikube: the first cluster
We are ready to create our first cluster:
# Check if the KUBECONFIG is not set
echo $KUBECONFIG

# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube

# Check if the .minikube directory is created > if yes, delete it
ls $HOME/.minikube

# Create the cluster with sudo
sudo minikube start --driver=none
In order to be able to use
kubectl with our user, and not
sudo, Minikube recommends running the
chown command:
# Change the owner of the .kube and .minikube directories
sudo chown -R $USER $HOME/.kube $HOME/.minikube

# Check the access and if the cluster is running
kubectl cluster-info

# Check the resources created
kubectl get all --all-namespaces
The cluster has been successfully created, and Minikube used the WSL2 IP, which is great for several reasons, and one of them is that we can open the
Kubernetes master URL in our Windows browser:
And here is the real strength of the WSL2 integration: once port 8443 is open on the WSL2 distro, it is actually forwarded to Windows, so instead of having to remember the IP address, we can also reach the Kubernetes master URL via localhost:
Minikube: can I see a nice dashboard?
Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.
For that, Minikube embeds the Kubernetes Dashboard. Thanks to it, running and accessing the Dashboard is very simple:
# Enable the Dashboard service
sudo minikube dashboard

# Access the Dashboard from a browser on Windows side
The command also creates a proxy, which means that once we end the command by pressing
CTRL+C, the Dashboard will no longer be accessible.
Still, if we look at the namespace
kubernetes-dashboard, we will see that the service is still created:
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard
Let's edit the service and change its type to
LoadBalancer:
# Edit the Dashboard service
kubectl edit service/kubernetes-dashboard --namespace kubernetes-dashboard

# Go to the very end and remove the last 2 lines
status:
  loadBalancer: {}

# Change the type from ClusterIP to LoadBalancer
type: LoadBalancer

# Save the file
Check again the Dashboard service and let's access the Dashboard via the LoadBalancer:
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard

# Access the Dashboard from a browser on Windows side with the URL: localhost:<port exposed>
Conclusion
It's clear that we are far from done as we could have some LoadBalancing implemented and/or other services (storage, ingress, registry, etc...).
Concerning Minikube on WSL2, since it requires SystemD to be enabled first, we can consider it an intermediate-level setup.
So with two solutions available, which one is the "best for you"? Both bring their own advantages and inconveniences, so here is an overview from our point of view only:
We hope you could have a real taste of the integration between the different components: WSL2 - Docker Desktop - KinD/Minikube. And that gave you some ideas or, even better, some answers to your Kubernetes workflows with KinD and/or Minikube on Windows and WSL2.
See you soon for other adventures in the Kubernetes ocean.
https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/
public static void main( String arg[] )
In the above statement, can I use an
int array in place of the
String array? What happens if I don’t put anything in the parentheses, i.e. if I use empty parentheses?
When you compile the code with the changes that you mentioned, it will compile successfully. When you try to run it, the JVM checks for a main method with a String array as its argument. Since there is no such main method, your code will not execute successfully and it throws a NoSuchMethodError.
###
No, I think you can’t use an int array instead of a String array, because the argument
int is used by the operating system to pass an integer value specifying the number of command-line arguments entered by the user. So you must follow one of the following patterns:

public static void main(String[] args)
public static void main(String args[])
###
I’m not sure what you mean by
Int array but no, you cannot. The method signature needs to exactly match
public static void main(String[] args). The only thing you can change is the name of the argument. The name of the method, the argument type, and the visibility (public vs. private, etc.) are what the runtime uses to find the method itself. If it does not conform to that signature, it is not an entry point method and consequently will not be called when your application starts up.
However, it should be noted that what you’re suggesting will compile without issue. The problems will not arise until you attempt to run the application.
###
The code will compile but not run.
The reason for the string[] is so that people can pass parameters through the command line.
###
The compiler will accept but the runtime will give you a NoSuchMethodError
###
If you’re looking to get numbers instead of strings, use this approach
public static void main(String[] args) {
    int[] numbers = new int[args.length];
    for (int i = 0; i < args.length; i++)
        numbers[i] = Integer.parseInt(args[i]);
}
Handling exceptions is exercise for the reader.
Other answers explain quite well your other issues of the ‘why’.
###
The main method is the entry point for the Java application. If you look at the Java Language Specification, it specifically states:

The method main must be declared public, static, and void. It must accept a single argument that is an array of strings. This method can be declared as either
public static void main(String[] args)
or
public static void main(String... args)
The String[] array passed in contains any command line arguments that have been passed in by whatever launched the application. You can then read them and convert them as you need.
###
No, it produces a runtime error like
Main method not found in class test, please define the main method as:
public static void main(String[] args)
Java is more specific about the main method signature; it considers your method a general method in the class.
###
If you want to convert String args from main static method to int then do the following:
public class StringToIntArgs {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        int[] anIntArray = new int[args.length];
        for (int i = 0; i < args.length; ++i) {
            anIntArray[i] = Integer.valueOf(args[i]);
            System.out.println(((Object) anIntArray[i]).getClass().getName());
        }
    }
}
Now compile and run the java code with arguments like below:
C:\Users\smith\Documents\programming\java>javac StringToIntArgs.java

C:\Users\smith\Documents\programming\java>java StringToIntArgs 1 2 3 4 5
java.lang.Integer
java.lang.Integer
java.lang.Integer
java.lang.Integer
java.lang.Integer
https://exceptionshub.com/java-basic-question.html
Deploying OpenShift Container Storage using bare metal infrastructure
How to install and set up your bare metal environment
Abstract
Preface
Red Hat OpenShift Container Storage 4.5 supports deployment on existing Red Hat OpenShift Container Platform (OCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments.
Both internal and external Openshift Container Storage clusters are supported on bare metal. See Planning your deployment for more information about deployment requirements.
To deploy OpenShift Container Storage, follow the appropriate deployment process for your environment:
Internal mode
- External mode
Chapter 1. Deploying using local storage devices
Deploying OpenShift Container Storage on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Use this section to deploy OpenShift Container Storage on bare metal infrastructure where OpenShift Container Platform is already installed.
To deploy Red Hat OpenShift Container Storage using local storage, follow these steps:
- Understand the requirements for installing OpenShift Container Storage using local storage devices.
For Red Hat Enterprise Linux based hosts, enabling file system access for containers on Red Hat Enterprise Linux based nodes.

Note
Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).
- Install the Red Hat OpenShift Container Storage Operator.
- Install Local Storage Operator.
- Find the available storage devices.
- Creating OpenShift Container Storage cluster service on bare metal.
1.1. Requirements for installing OpenShift Container Storage using local storage devices
You must have at least three OpenShift Container Platform worker nodes in the cluster with locally attached storage devices on each of them.
- Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Container Storage.
- For minimum starting node requirements, see Resource requirements section in Planning guide.
- The devices to be used must be empty, that is, there should be no PVs, VGs, or LVs remaining on the disks.
You must have a minimum of three labeled nodes.
- It is recommended that the worker nodes are spread across three different physical nodes, racks or failure domains for high availability.
Each node that has local storage devices to be used by OpenShift Container Storage must have a specific label to deploy OpenShift Container Storage pods. To label the nodes, use the following command:
$ oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''
- There should not be any storage providers managing locally mounted storage on the storage nodes that would conflict with the use of Local Storage Operator for Red Hat OpenShift Container Storage.
- The Local Storage Operator version must match the Red Hat OpenShift Container Platform version in order to have the Local Storage Operator fully supported with Red Hat OpenShift Container Storage. The Local Storage Operator does not get upgraded when Red Hat OpenShift Container Platform is upgraded.
1.4. Installing Local Storage Operator
Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Container Storage clusters on local storage devices.
Prerequisites
Create a namespace called
local-storage as follows:
- Click Administration → Namespaces in the left pane of the OpenShift Web Console.
- Click Create Namespace.
- In the Create Namespace dialog box, enter
local-storage for Name.
- Select No restrictions option for Default Network Policy.
- Click Create.
Procedure
- Click Operators → OperatorHub in the left pane of the OpenShift Web Console.
- Search for Local Storage Operator from the list of operators and click on it.
Click Install.
Figure 1.2. Install Operator page
On the Install Operator page, ensure the following options are selected
- Update Channel as stable-4.5
- Installation Mode as A specific namespace on the cluster
- Installed Namespace as local-storage.
- Approval Strategy as Automatic
- Click Install.
- Verify that the Local Storage Operator shows the Status as
Succeeded.
1.5. Finding available storage devices
Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Container Storage label
cluster.ocs.openshift.io/openshift-storage='' before creating PVs for bare metal.
Procedure
List and verify the name of the worker nodes with the OpenShift Container Storage label.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2
Log in to each worker node that is used for OpenShift Container Storage resources and find the unique
by-id device name for each available raw block device.
$ oc debug node/<Nodename>
Example output:
$ oc debug node/bmworker01 Starting pod/bmworker01-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvda 202:0 0 120G 0 disk |-xvda1 202:1 0 384M 0 part /boot |-xvda2 202:2 0 127M 0 part /boot/efi |-xvda3 202:3 0 1M 0 part `-xvda4 202:4 0 119.5G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.5G 0 dm /sysroot nvme0n1 259:0 0 931G 0 disk
In this example, for
bmworker01, the available local device is
nvme0n1.
Identify the unique ID for each of the devices selected in Step 2.
sh-4.4# ls -l /dev/disk/by-id/ | grep nvme0n1 lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-INTEL_SSDPE2KX010T7_PHLF733402LM1P0GGN -> ../../nvme0n1
In the above example, the ID for the local device nvme0n1 is
nvme-INTEL_SSDPE2KX010T7_PHLF733402LM1P0GGN
- Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Container Storage. See this Knowledge Base article for more details.
1.6. Creating OpenShift Container Storage cluster on bare metal
Prerequisites
- Ensure that all the requirements in the Requirements for installing OpenShift Container Storage using local storage devices section are met.
- You must have three worker nodes with the same storage type and size attached to each node (for example, 2TB NVMe hard drive) to use local storage devices on bare metal.
Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Container Storage:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}'
To identify storage devices on each node, refer to Finding available storage devices.
Procedure
Create the
LocalVolume CR for block PVs.
Example of a LocalVolume CR local-storage-block.yaml using the OCS label as node selector. The devicePaths entries must be changed to match the device IDs found earlier:

      devicePaths:
        - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY81260978128A # <-- modify this line
        - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPY80440W5U128A # <-- modify this line
        - /dev/disk/by-id/nvme-INTEL_SSDPEKKA128G7_BTPYB85AABDE128A # <-- modify this line
Create the
LocalVolume CR for block PVs.
$ oc create -f local-storage-block.yaml
Check if the pods are created.
Example output:
NAME                                      READY   STATUS    RESTARTS   AGE
local-block-local-diskmaker-cmfql         1/1     Running   0          31s
local-block-local-diskmaker-g6fzr         1/1     Running   0          31s
local-block-local-diskmaker-jkqxt         1/1     Running   0          31s
local-block-local-provisioner-jgqcc       1/1     Running   0          31s
local-block-local-provisioner-mx49d       1/1     Running   0          31s
local-block-local-provisioner-qbcvp       1/1     Running   0          31s
local-storage-operator-54bc7566c6-ddbrt   1/1     Running   0          12m
Check if the PVs are created.
$ oc get pv
Example output:
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-150fdc87   931Gi      RWO            Delete           Available           localblock              2m11s
local-pv-183bfc0a   931Gi      RWO            Delete           Available           localblock              2m15s
local-pv-b2f5cb25   931Gi      RWO            Delete           Available           localblock              2m21s
Check for the new
localblock
StorageClass.
$ oc get sc|egrep -e "localblock|NAME"
Example output:
NAME         PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  4d23h
Create the OpenShift Container Storage Cluster Service that uses the
localblock Storage Class.
- Log into the OpenShift Web Console.
- Click Operators → Installed Operators from the OpenShift Web Console to view the installed operators. Ensure that the Project selected is openshift-storage.
On the Installed Operators page, click Openshift Container Storage.
Figure 1.3. OpenShift Container Storage Operator page
On the Installed Operators → Operator Details page, perform either of the following to create a Storage Cluster Service.
On the Details tab → Provided APIs → OCS Storage Cluster, click Create Instance.
Figure 1.4. Operator Details Page
Alternatively, select the Storage cluster tab and click Create OCS Cluster Service.
Figure 1.5. Storage Cluster tab
On the Create Storage Cluster page, ensure that the following options are selected:
- Leave Select Mode as Internal.
In the Nodes section, for the use of OpenShift Container Storage service, select a minimum of three or a multiple of three worker nodes from the available list.
It is recommended that the worker nodes are spread across three different physical nodes, racks or failure domains for high availability.

Note
To find specific worker nodes in the cluster, you can filter nodes on the basis of Name or Label.
- Name allows you to search by name of the node
- Label allows you to search by selecting the predefined label
- Ensure OpenShift Container Storage rack labels are aligned with physical racks in the datacenter to prevent a double node failure at the failure domain level.
For minimum starting node requirements, see Resource requirements section in Planning guide.
- Select localblock from the Storage Class dropdown list.
See Verifying your OpenShift Container Storage installation.
Chapter 2. Verifying OpenShift Container Storage deployment for internal mode
Use this section to verify that OpenShift Container Storage is deployed correctly.
2.1. Verifying the state of the pods
To determine if OpenShift Container storage is deployed successfully, you can verify that the pods are in
Running state.
Procedure
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
Select openshift-storage from the Project drop down list.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Container storage cluster”.
Verify that the following pods are in running and completed state by clicking on the Running and the Completed tabs:
Table 2.1. Pods corresponding to OpenShift Container storage cluster
2.2. Verifying the OpenShift Container Storage cluster is healthy
You can verify health of OpenShift Container Storage cluster using the persistent storage dashboard. For more information, see Monitoring OpenShift Container Storage.
- Click Home → Overview from the left pane of the OpenShift Web Console and click Persistent Storage tab.
In the Status card, verify that OCS Cluster has a green tick mark as shown in the following image:
Figure 2.1. Health status card in Persistent Storage Overview Dashboard
In the Details card, verify that the cluster information is displayed appropriately as follows:
Figure 2.2. Details card in Persistent Storage Overview Dashboard
2.3. Verifying the Multicloud Object Gateway is healthy
You can verify the health of the OpenShift Container Storage cluster using the object service dashboard. For more information, see Monitoring OpenShift Container Storage.
- Click Home → Overview from the left pane of the OpenShift Web Console and click the Object Service tab.
In the Status card, verify that the Multicloud Object Gateway (MCG) storage displays a green tick icon as shown in following image:
Figure 2.3. Health status card in Object Service Overview Dashboard
In the Details card, verify that the MCG information is displayed appropriately as follows:
Figure 2.4. Details card in Object Service Overview Dashboard
2.4. Verifying that the OpenShift Container Storage specific storage classes exist
To verify the storage classes exists in the cluster:
- Click Storage → Storage Classes from the left pane
Chapter 3. Uninstalling OpenShift Container Storage
3.1. Uninstalling OpenShift Container Storage on Internal mode
Use the steps in this section to uninstall OpenShift Container Storage instead of the Uninstall option from the user interface.
Prerequisites
- Make sure that the OpenShift Container Storage cluster is in a healthy state. The deletion might fail if some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, you should contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
- Make sure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. PVCs and OBCs will be deleted during the uninstall process.
Procedure
Query for PVCs and OBCs that use the OpenShift Container Storage based storage class provisioners.
For example :
$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )' | grep -v noobaa-default-backing-store-noobaa-pvc
$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
$ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rgw")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
$ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
Follow these instructions to ensure that the PVCs and OBCs listed in the previous step are deleted.
If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the clean up steps provided in the following sections as required:
- Section 3.2, “Removing monitoring stack from OpenShift Container Storage”
- Section 3.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
Section 3.4, “Removing the cluster logging operator from OpenShift Container Storage”
For each of the remaining PVCs or OBCs, follow the steps mentioned below :
- Determine the pod that is consuming the PVC or OBC.
Identify the controlling API object such as a
Deployment,
StatefulSet,
DaemonSet,
Job, or a custom controller.
Each API object has a metadata field known as
OwnerReference. This is a list of associated objects. The
OwnerReferencewith the
controllerfield set to true will point to controlling objects such as
ReplicaSet,
StatefulSet,
DaemonSetand so on.
Ensure that the API object is not consuming a PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.

Note
You can ignore the noobaa pods.
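As a purely illustrative helper (not part of the official procedure), the kind of the controlling object can be read straight from a pod's ownerReferences:

$ oc get pod <pod name> -n <project name> -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'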
Delete the OBCs.
$ oc delete obc <obc name> -n <project name>
Delete any custom Bucket Class you have created.
$ oc get bucketclass -A | grep -v noobaa-default-bucket-class
$ oc delete bucketclass <bucketclass name> -n <project-name>
If you have created any custom Multi Cloud Gateway backingstores, delete them.
List and note the backingstores.
for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Found backingstore $bs"; echo "Its has the following pods running :"; echo "$(oc get pods -o name -n openshift-storage | grep $(echo ${bs} | cut -f2 -d/))"; done
Delete each of the backingstores listed above and confirm that the dependent resources also get deleted.
for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Deleting Backingstore $bs"; oc delete -n openshift-storage $bs; done
If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.
$ oc get pods -n openshift-storage | grep noobaa-pod | grep -v noobaa-default-backing-store-noobaa-pod
$ oc get pvc -n openshift-storage --no-headers | grep -v noobaa-db | grep noobaa-pvc | grep -v noobaa-default-backing-store-noobaa-pvc
Delete the remaining PVCs listed in Step 1.
$ oc delete pvc <pvc name> -n <project-name>
List and note the backing local volume objects. If there are no results, skip steps 7 and 8.
$ for sc in $(oc get storageclass|grep 'kubernetes.io/no-provisioner' |grep -E $(oc get storagecluster -n openshift-storage -o jsonpath='{ .items[*].spec.storageDeviceSets[*].dataPVCTemplate.spec.storageClassName}' | sed 's/ /|/g')| awk '{ print $1 }'); do echo -n "StorageClass: $sc "; oc get storageclass $sc -o jsonpath=" { 'LocalVolume: ' }{ .metadata.labels['local\.storage\.openshift\.io/owner-name'] } { '\n' }"; done
Example output:
StorageClass: localblock LocalVolume: local-block
Delete the
StorageCluster object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagecluster --all --wait=true
Delete the namespace and wait till the deletion is complete. You will need to switch to another project if openshift-storage is the active project.
Switch to another namespace if openshift-storage is the active namespace.
For example :
$ oc project default
Delete the openshift-storage namespace.
$ oc delete project openshift-storage --wait=true --timeout=5m
Wait for approximately five minutes and confirm if the project is deleted successfully.
$ oc get project openshift-storage
Output:
Error from server (NotFound): namespaces "openshift-storage" not found

Note
While uninstalling OpenShift Container Storage, if namespace is not deleted completely and remains in Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
Clean up the storage operator artifacts on each node.
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /var/lib/rook; done
Ensure you can see the removed directory
/var/lib/rook in the output.
Confirm that the directory no longer exists
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done
Delete the local volume created during the deployment and repeat for each of the local volumes listed in step 3.
For each of the local volumes, do the following:
Set the variable
LV to the name of the LocalVolume and variable
SC to the name of the StorageClass listed in Step 3.
For example:
$ LV=local-block
$ SC=localblock
List and note the devices to be cleaned up later.
$ oc get localvolume -n local-storage $LV -o jsonpath='{ .spec.storageClassDevices[*].devicePaths[*] }'
Example output:
/dev/disk/by-id/nvme-xxxxxx /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz ...
Wipe the disks for each of the local volumes listed in step 3 so that they can be reused.
List the storage nodes.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME STATUS ROLES AGE VERSION node-xxx Ready worker 4h45m v1.18.3+6c42de8 node-yyy Ready worker 4h46m v1.18.3+6c42de8 node-zzz Ready worker 4h45m v1.18.3+6c42de8
Obtain the node console and execute the
chroot /host command when the prompt appears.
$ oc debug node/node-xxx Starting pod/node-xxx-debug ... To use host binaries, run `chroot /host` Pod IP: w.x.y.z If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host
Store the disk paths gathered in step 7(ii) in the
DISKS variable within quotes.
sh-4.2# DISKS="/dev/disk/by-id/nvme-xxxxxx /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz"
Run
sgdisk --zap-all on all the disks.
sh-4.4# for disk in $DISKS; do sgdisk --zap-all $disk;done
Example output:
Problem opening /dev/disk/by-id/nvme-xxxxxx for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!
Problem opening /dev/disk/by-id/nvme-yyyyy for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.

Note: Ignore file-not-found warnings as they refer to disks that are on other machines.
Exit the shell and repeat for the other nodes.
sh-4.4# exit exit sh-4.2# exit exit Removing debug pod ...
Delete the
openshift-storage.noobaa.io storage class.
$ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-

Note
You can ignore the warnings displayed for the unlabeled nodes such as label <label> not found.
Confirm all PVs are deleted. If there is any PV left in the Released state, delete it.
# oc get pv | egrep 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs'
# oc delete pv <pv name>

Remove the OpenShift Container Storage CustomResourceDefinitions, for example:

$ oc delete crd storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io --wait=true --timeout=5m
To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console,
- Click Home → Overview to access the dashboard.
- Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.
3.2. Removing monitoring stack from OpenShift Container Storage
Use this section to clean up monitoring stack from OpenShift Container Storage.
The PVCs that are created as a part of configuring the monitoring stack are in the
openshift-monitoring namespace.
Prerequisites
PVCs are configured to use OpenShift Container Platform monitoring stack.
For
3.4. Removing the cluster logging operator from OpenShift Container Storage
Use this section to clean up the cluster logging operator from OpenShift Container Storage.
The PVCs that are created as a part of configuring cluster logging operator are in
openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been
https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.5/html-single/deploying_openshift_container_storage_using_bare_metal_infrastructure/index
Contents
With C++17, we get class template argument deduction. It is based on template argument deduction for function templates and allows us to get rid of the need for clumsy
make_XXX functions.
The problem
Template argument deduction for function templates has been around since before the C++98 standard. It allows us to write cleaner and less verbose code. For example, in
int m = std::max(22, 54); it is pretty obvious that we are calling
std::max<int> here and not
std::max<double> or
std::max<MyClass>. In other contexts, we do not really care too much about the concrete template argument types or they might be impossible to type:
Point rightmost = *std::max_element(
  std::begin(all_points), std::end(all_points),
  [](Point const& p1, Point const& p2) { return p2.x > p1.x; }
);
Here, we have
std::max_element<Iter, Compare> – and we don’t care what kind of iterator
Iter is, and we can’t specify the type of
Comp because we used a lambda.
With
auto we got even more capabilities for the compiler to deduce types for variables and function return types in C++11 and C++14.
However, what has been missing from the start is class template argument deduction. When we created, for example, a new
std::pair of things we had to explicitly say what kind of pair it was, e.g.
std::pair<int, double> myPair(22, 43.9);
The common workaround for this problem has been to provide a
make_XXX function that uses function template argument deduction to determine the class template argument types. The above example then could be written as
auto myPair = std::make_pair(22, 43.9);
However, this requires the use of a function that has a different name, which is rather clumsy. Authors of class templates might or might not have written those functions, and, of course, writing those functions by hand is boilerplate that brings nothing but the chance to introduce bugs.
C++17 solves the issue by introducing automated and user defined class template argument deduction. Now we can just do the above by simply writing
std::pair myPair{22, 43.9};.
How it works
The basis for class template argument deduction is, again, function template argument deduction. If an object is created using a template name, but without specifying any template parameters, the compiler builds an imaginary set of “constructor function templates” called deduction guides and uses the usual overload resolution and argument deduction rules for function templates.
Object creation may occur as shown above for the pair, or via function-style construction like
myMap.insert(std::pair{"foo"s, 32});, or in a new expression. Those deduction guides are not actually created or called – it’s only a concept for how the compiler picks the right template parameters and constructor for the creation of the object.
The set of deduction guides consists of some automatically generated ones and – optionally – some user-defined ones.
Automatic deduction guides
The compiler basically generates one deduction guide for each constructor of the primary class template. The template parameters of the imaginary constructor function template are the class template parameters plus any template parameters the constructor might have. The function parameters are used as they are. For
std::pair some of those imaginary function templates would then look like this:
template <class T1, class T2>
constexpr auto pair_deduction_guide() -> std::pair<T1, T2>;

template <class T1, class T2>
auto pair_deduction_guide(std::pair<T1, T2> const& p) -> std::pair<T1, T2>;

template <class T1, class T2>
constexpr auto pair_deduction_guide(T1 const& x, T2 const& y) -> std::pair<T1, T2>;

template <class T1, class T2, class U1, class U2>
constexpr auto pair_deduction_guide(U1&& x, U2&& y) -> std::pair<T1, T2>;

template <class T1, class T2, class U1, class U2>
constexpr auto pair_deduction_guide(std::pair<U1, U2> const& p) -> std::pair<T1, T2>;

//etc...
The first deduction guide would be the one generated from
pair‘s default constructor. The second from the copy constructor, and the third from the constructor that copies arguments of the exact right types. This is the one that makes
std::make_pair pretty much obsolete. The fourth is generated from the constructor that converts arguments to
T1 and
T2 and so on.
Of the four deduction guides shown, all would be generated and considered for overload resolution, but only the second and third would ever be actually used. The reason is that for the others, the compiler would not be able to deduce
T1 and
T2 – and explicitly providing them would turn off class argument deduction and we’re back to the old days.
There are two deduction guides that may be generated even if the corresponding constructor does not exist: If the primary template does not have any constructors or is not defined at all, a deduction guide for what would be the default constructor is generated. In addition, the compiler will always generate a copy deduction guide. The latter makes sense if you think about a class similar to this:
template <class T>
struct X {
  T t;
  X(T const& t_) : t{t_} {}
};

X x{22};   // -> X<int>
X x2{x};
Without the copy deduction guide, there could be cases where
x2 would not be deduced as a copy of
x which it obviously should be, but as a
X<X<int>>, wrapping a copy of
x.
Note: Automatic deduction guides are only generated for constructors of the primary template. That means if you have partial or full template specializations that provide additional constructors, they will not be considered. If you want to add them to the set of deduction guides, you have to write them manually.
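A small sketch of what that note means in practice (this example is mine, not from the article):

#include <string>

template <class T>
struct Wrapper {
  T value;
  Wrapper(T const& v) : value{v} {}
};

// A specialization that adds an extra constructor...
template <>
struct Wrapper<std::string> {
  std::string value;
  Wrapper(std::string const& v) : value{v} {}
  Wrapper(char const* v) : value{v} {}  // invisible to automatic guide generation
};

// ...needs a hand-written guide if deduction from char const* is wanted:
Wrapper(char const*) -> Wrapper<std::string>;

Wrapper w{"hello"};  // Wrapper<std::string> thanks to the user-defined guide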
User-defined deduction guides
User defined deduction guides have to be defined in the same scope as the class template they apply to. They look pretty similar to the pseudo code I wrote above for the automatic guides. A user-defined version of the deduction guide that replaces
make_pair would have to be written like this:
namespace std {
  // ...
  template<class T1, class T2>
  pair(T1 const&, T2 const&) -> pair<T1, T2>;
}
They look pretty much like a function signature with trailing return type, but without the
auto return type – which could be considered consistent with the syntax of constructors which don’t have a return type either.
There is not much more surprising to user-defined deduction guides. We can’t write a function body since they are not actual functions but only hints which constructor of which class template instantiation to call. One thing to note is that they don’t need to be templates. For example, the following guide could make sense:
template <class T>
class Element {
  //...
public:
  Element(T const&);
};

//don't wrap C-strings in Elements...
Element(char const*) -> Element<std::string>;
A popular example for user-defined deduction guides are range constructors for standard containers, e.g.
std::set:
template <class Iter>
std::set<T, Allocator>::set(Iter first, Iter last, Allocator const& alloc = Allocator());
The automatic deduction guide for this constructor will not work since the compiler can not deduce
T. With user-defined deduction guides, the standard library can help out. It will look something like this:
template <class Iter, class Allocator>
set(Iter, Iter, Allocator const&)
  -> set<typename std::iterator_traits<Iter>::value_type, Allocator>;
The C++17 standard library provides a lot of sensible deduction guides like this one.
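To illustrate (my example, not the article's), the range guide for std::set and the pair deduction discussed earlier can be used like this:

#include <set>
#include <utility>
#include <vector>

int main() {
  std::vector<int> v{3, 1, 4, 1, 5};

  std::set s(v.begin(), v.end());   // deduced as std::set<int> via the range guide
  std::pair p{42, 3.14};            // deduced as std::pair<int, double>

  return static_cast<int>(s.size()) + p.first;
}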
Conclusion
With class template argument deduction, the C++17 standard closes a gap in our toolbox to write simple, yet type-safe code. The need for
make_XXX workaround functions is gone (this does not apply to
make_unique and
make_shared which do something different).
How often should we rely on class template argument deduction? Time will tell what the best practices are, but my guess is that it will be similar to template argument deduction for functions: Use it by default, only explicitly specify template parameters when they can’t be deduced or when not using them would make the code unclear.
11 Comments
“the compiler will always generate a copy deduction guide”
Is there a move deduction guide too?
No. The actual deduction guide could be thought of as
template <class... Ts>
X(X<Ts...>) -> X<Ts...>
Without any ref on the argument. The actual constructor call will then be something different and be resolved according to the actual arguments.
Sounds like that (almost) makes std::experimental::make_array() obsolete also.
Yes.
std::array now has a "user"-defined deduction rule which will give an
std::array<std::common_type<Args...>> (modulo decaying the arguments, but you get the gist)
Beware that there are cases when std::make_pair and std::pair with the same arguments actually create two different types of a pair. See this SO answer for more details:
What is it that
make_unique and
make_shared do differently?
Both allocate memory and do not deduce the return type. In fact, you have to explicitly provide the template parameter of the returned smart pointer.
Is it not also the case that they are more exception-safe? I’m not clear on the details (I don’t use exceptions much), but I heard that as reasoning 🙂
The exception safety comes mostly from the smart pointer they return, but they tie the memory allocation and the binding of that memory to the smart pointer into a single step, so you don't have to do it manually.
https://arne-mertz.de/2017/06/class-template-argument-deduction/
Closed Bug 1449068 Opened 4 years ago Closed 4 years ago
Port nsCSSCounterStyleRule to be backed by Servo data
Categories
(Core :: CSS Parsing and Computation, enhancement)
Tracking
()
mozilla61
People
(Reporter: xidorn, Assigned: xidorn)
References
(Blocks 1 open bug)
Details
Attachments
(5 files, 1 obsolete file)
These two rules currently store data on the C++ side rather than the Rust side. This is a problem because we need to maintain an extra descriptor-to-string table on both sides. (Note: descriptor lists cannot be merged anyway, but the map with string can.) Also, currently we cannot strip the serialization code for specified values because of these rules. Once we change them, we should be able to remove the majority of the serialization code in nsCSSValue, which is quite a bit.
Comment on attachment 8964820 [details]
Bug 1449068 WIP - Use Servo data to back @counter-style rule.

::: servo/ports/geckolib/glue.rs:4786
(Diff revision 1)
>  #[no_mangle]
> -pub extern "C" fn Servo_StyleSet_GetCounterStyleRule(
> +pub unsafe extern "C" fn Servo_StyleSet_GetCounterStyleRule(
>      raw_data: RawServoStyleSetBorrowed,
>      name: *mut nsAtom,
> -) -> *mut nsCSSCounterStyleRule {
> +) -> RawServoCounterStyleRuleBorrowedOrNull {
>      let data = PerDocumentStyleData::from_ffi(raw_data).borrow();
> -
> -    unsafe {
> -        Atom::with(name, |name| {
> +    Atom::with(name, |name| {
> -            data.stylist
> +        data.stylist
> -                .iter_extra_data_origins()
> +            .iter_extra_data_origins()
> -                .filter_map(|(d, _)| d.counter_styles.get(name))
> +            .filter_map(|(d, _)| d.counter_styles.get(name))
> -                .next()
> +            .next()
> +            .map(|rule| rule.as_borrowed())
> -        })
> +    })
> -    }.map(|rule| {
> -        let global_style_data = &*GLOBAL_STYLE_DATA;
> -        let guard = global_style_data.shared_lock.read();
> -        rule.read_with(&guard).get()
> -    }).unwrap_or(ptr::null_mut())
>  }

So this is the function that currently doesn't compile, and it's not yet clear to me what's wrong with the lifetime here. Because of this, this patch is not tested, and I would write a more detailed commit message (and probably split some part off) later. It is just pushed so that I can get some help with this function :/
So I think the problem here is that, borrow() breaks the lifetime from raw_data, so data and anything it derefs to can only live in the function scope, and the final returned value has the same lifetime as data, not raw_data, but from the function signature, the compiler expects the returned value has the same lifetime as raw_data. It is not clear to me what is the right way to solve this problem. It is probably a violation of lifetime from the beginning given that we use AtomicRefCell there. It seems we actually never return a Borrowed from any Servo_* function. We either return strong reference, or raw pointer. Probably just use raw pointer this time is enough, I guess...
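For illustration only, a strong-reference variant of that function could look roughly like the sketch below; the exact FFI helper names (into_strong, Strong::null) are assumptions on my part and may not match the real bindings:

#[no_mangle]
pub unsafe extern "C" fn Servo_StyleSet_GetCounterStyleRule(
    raw_data: RawServoStyleSetBorrowed,
    name: *mut nsAtom,
) -> RawServoCounterStyleRuleStrong {
    let data = PerDocumentStyleData::from_ffi(raw_data).borrow();
    Atom::with(name, |name| {
        data.stylist
            .iter_extra_data_origins()
            .filter_map(|(d, _)| d.counter_styles.get(name))
            .next()
            // Cloning the Arc detaches the result from the borrow of raw_data,
            // so no lifetime needs to outlive this function.
            .map(|rule| rule.clone().into_strong())
            .unwrap_or_else(RawServoCounterStyleRuleStrong::null)
    })
}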
Comment on attachment 8964818 [details]
Remove initial value of @counter-style rule descriptors.

::: servo/components/style/counter_style/mod.rs:175
(Diff revision 1)
>          &self.name
>      }
>
> +    /// Get the system of this counter style rule, default to
> +    /// `symbolic` if not specified.
> +    pub fn get_system(&self) -> &System {

maybe "resolved_system"? It's somewhat weird that `system` returns an `Option<..>` and `get_system` a value.
Attachment #8964818 - Flags: review?(emilio) → review+
Comment on attachment 8964819 [details]
Add source_location to CounterStyleRule.

::: servo/components/style/counter_style/mod.rs:163
(Diff revision 1)
> +    /// Line and column of the @counter-style rule source code.
> +    pub source_location: SourceLocation,
>  }
>
>  impl CounterStyleRuleData {
> -    fn empty(name: CustomIdent) -> Self {
> +    fn empty(name: CustomIdent, location: SourceLocation) -> Self {

nit: If you call the argument `source_location` you can use the shorthand syntax (and you can do it for the `name` if you want, while at it).
Attachment #8964819 - Flags: review?(emilio) → review+
Comment on attachment 8964847 [details] Bug 1449068 part 1 - Wrap content of ServoStyleSetInlines.h in mozilla namespace. Huh, how did this ever work, I guess we always include this file somewhere where we also have a `using namespace mozilla` or something?
Attachment #8964847 - Flags: review?(emilio) → review+
Comment on attachment 8964848 [details]
Bug 1449068 part 2 - Use Servo data to back @counter-style rule.

r=me, reluctantly, because what we were doing is basically the same... Though if you don't go the path of requiring a strong reference instead, please do file a bug.

::: layout/style/CounterStyleManager.cpp:2041
(Diff revision 2)
>
>    // Names are compared case-sensitively here. Predefined names should
>    // have been lowercased by the parser.
>    ServoStyleSet* styleSet = mPresContext->StyleSet();
> -  nsCSSCounterStyleRule* rule = styleSet->CounterStyleRuleForName(aName);
> +  const RawServoCounterStyleRule*
> +    rule = styleSet->CounterStyleRuleForName(aName);

nit: I think we usually don't split assignments in this way and put the variable name and `=` in the upper line. But it's not a big deal.

::: layout/style/ServoCounterStyleRule.cpp:17
(Diff revision 2)
> +#include "nsStyleUtil.h"
> +
> +namespace mozilla {
> +
> +ServoCounterStyleRule::~ServoCounterStyleRule()
> +{

Maybe `= default` in the header?

::: layout/style/ServoCounterStyleRule.cpp:30
(Diff revision 2)
> +}
> +
> +bool
> +ServoCounterStyleRule::IsCCLeaf() const
> +{
> +  return Rule::IsCCLeaf();

Just don't override it?

::: servo/ports/geckolib/glue.rs:4788
(Diff revision 2)
> }
>
> +// XXX Ideally this should return a RawServoCounterStyleRuleBorrowedOrNull,
> +// but we cannot, because the value from AtomicRefCell::borrow() can only
> +// live in this function, and thus anything derived from it cannot get the
> +// same lifetime as raw_data in parameter.

Hmm, so this is slightly annoying, because it's pretty much leaking the pointer outside of the borrow... But that's pretty much what we did anyway, on the other hand. We don't have any way to represent this in C++. I guess the alternative is returning a strong reference, right? Would that be better, given this code is probably not super-hot? I'd really prefer not doing this, but if you do, can you file a bug and reference it from here so we can track it?
Attachment #8964848 - Flags: review?(emilio) → review+
Comment on attachment 8964848 [details]
Bug 1449068 part 2 - Use Servo data to back @counter-style rule.

> Maybe `= default` in the header?

That would require extra dependency to ServoBindingTypes.h... which is probably not too bad, though.

> Just don't override it?

You can't. This method has an annotation on `css::Rule` to force overriding it in subclass.
Pushed by xquan@mozilla.com: part 1 - Wrap content of ServoStyleSetInlines.h in mozilla namespace. r=emilio part 2 - Use Servo data to back @counter-style rule. r=emilio
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla61
|
https://bugzilla.mozilla.org/show_bug.cgi?id=1449068
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
This page documents how Doxygen is set up in the GROMACS source tree, as well as guidelines for adding new Doxygen comments. Examples are included, as well as tips and tricks for avoiding Doxygen warnings. The guidelines focus on C++ code and other new code that follows the new module layout. Parts of the guidelines are still applicable to documenting older code (e.g., within gmxlib/ or mdlib/), in particular the guidelines about formatting the Doxygen comments and the use of \internal. See Documentation organization for the overall structure of the documentation.
To get started quickly, you only need to read the first two sections to understand the overall structure of the documentation, and take a look at the examples at the end. The remaining sections provide the details for understanding why the examples are the way they are, and for more complex situations. They are meant more as a reference to look up solutions for particular problems, rather than single-time reading. To understand or find individual Doxygen commands, you should first look at Doxygen documentation ().
The GROMACS source tree is set up to produce three different levels of Doxygen documentation: public API documentation (suffix -user), library API documentation (suffix -lib), and full documentation (suffix -full).
Each subsequent level of documentation includes all the documentation from the levels above it. The suffixes above refer to the suffixes of Doxygen input and output files, as well as the name of the output directory. When all the flavors have been built, the front pages of the documentation contain links to the other flavors, and explain the differences in more detail.
As a general guideline, the public API documentation should be kept free of anything that a user linking against an unmodified GROMACS does not see. In other words, the public API documentation should mainly document the contents of installed headers, and provide the necessary overview of using those. Also, verbosity requirements for the public API documentation are higher: ideally, readers of the documentation could immediately start using the API based on the documentation, without any need to look at the implementation.
Similarly, the library API documentation should not contain things that other modules in GROMACS can or should never call. In particular, anything declared locally in source files should be only available in the full documentation. Also, if something is documented, and is not identified to be in the library API, then it should not be necessary to call that function from outside its module.
If you simply want to see up-to-date documentation, you can go to to see the documentation for the current development version. Jenkins also runs Doxygen for all changes pushed to Gerrit for release-5-0 and master branches, and the resulting documentation can be viewed from the link posted by Jenkins. The Doxygen build is marked as unstable if it introduces any Doxygen warnings.
You may need to build the documentation locally if you want to check the results after adding/modifying a significant amount of comments. This is recommended in particular if you do not have much experience with Doxygen. It is a good idea to build with all the different settings to see that the result is what you want, and that you do not produce any warnings. For local work, it is generally a good idea to set GMX_COMPACT_DOXYGEN=ON CMake option, which removes some large generated graphs from the documentation and speeds up the process significantly.
All files related to Doxygen reside in the docs/doxygen/ subdirectory in the source and build trees. In a freshly checked out source tree, this directory contains various Doxyfile-*.cmakein files. When you run CMake, corresponding files Doxyfile-user, Doxyfile-lib, and Doxyfile-full are generated at the corresponding location in the build tree. There is also a Doxyfile-common.cmakein, which is used to produce Doxyfile-common. This file contains settings that are shared between all the input files. Doxyfile-compact provides the extra settings for GMX_COMPACT_DOXYGEN=ON.
You can run Doxygen directly with one of the generated files (all output will be produced under the current working directory), or build one of the doxygen-user, doxygen-lib, and doxygen-full targets. The targets run Doxygen in a quieter mode and only show the warnings if there were any, and put the output under docs/html/doxygen/ in the build tree, so that the Doxygen build cooperates with the broader webpage target. The doxygen-all target builds all three targets with less typing.
The generated documentation is put under html-user/, html-lib/, and/or html-full/. Open index.xhtml file from one of these subdirectories to start browsing (for GROMACS developers, the html-lib/ is a reasonable starting point). Log files with all Doxygen warnings are also produced as docs/doxygen/doxygen-*.log, so you can inspect them after the run.
You will need Doxygen 1.8.5 to build the current documentation. Other versions may work, but likely also produce warnings. Additionally, graphviz and mscgen are required for some graphs in the documentation, and latex for formulas. Working versions are likely available through most package managers. It is possible to build the documentation without these tools, but you will see some errors and the related figures will be missing from the documentation.
Doxygen provides quite a few different alternative styles for documenting the source code. There are subtleties in how Doxygen treats the different types of comments, and this also depends somewhat on the Doxygen configuration. It is possible to change the meaning of a comment by just changing the style of comment it is enclosed in. To avoid such issues, and to avoid needing to manage all the alternatives, a single style throughout the source tree is preferable. When it comes to treatment of styles, GROMACS uses the default Doxygen configuration with one exception: JAVADOC_AUTOBRIEF is set ON to allow more convenient one-line brief descriptions in C code.
Majority of existing comments in GROMACS uses Qt-style comments (/*! and //! instead of /** and ///, \brief instead of @brief etc.), so these should be used also for new documentation. There is a single exception for brief comments in C code; see below.
Similarly, existing comments use /*! for multiline comments in both C and C++ code, instead of using multiple //! lines for C++. The rationale is that since the code will be a mixture of both languages for a long time, it is more uniform to use similar style in both. Also, since files will likely transition from C to C++ gradually, rewriting the comments because of different style issues should not generally be necessary. Finally, multi-line //! comments can work differently depending on Doxygen configuration, so it is better to avoid that ambiguity.
When adding comments, ensure that a short brief description is always produced. This is used in various listings, and should briefly explain the purpose of the method without unnecessarily expanding those lists. The basic guideline is to start all comment blocks with \brief (possibly after some other Doxygen commands). If you want to avoid the \brief for one-liners, you can use //!, but the description must fit on a single line; otherwise, it is not interpreted as a brief comment. Note in particular that a simple /*! without a \brief does not produce a brief description. Also note that \brief marks the whole following paragraph as a brief description, so you should insert an empty line after the intended brief description.
In C code, // comments must be avoided because some compilers do not like them. If you want to avoid the \brief for one-liners in C code, use /** instead of //!. If you do this, the brief description should not contain unescaped periods except at the end. Because of this, you should prefer //! in C++ code.
Put the documentation comments in the header file that contains the declaration, if such a header exists. Implementation-specific comments that do not influence how a method is used can go into the source file, just before the method definition, with an \internal tag in the beginning of the comment block. Doxygen-style comments within functions are not generally usable.
At times, you may need to exclude some part of a header or a source file such that Doxygen does not see it at all. In general, you should try to avoid this, but it may be necessary to remove some functions that you do not want to appear in the public API documentation, and which would generate warnings if left undocumented, or to avoid Doxygen warnings from code it does not understand. Prefer \cond and \endcond to do this. If \cond does not work for you, you can also use #ifndef DOXYGEN. If you exclude a class method in a header, you also need to exclude it in the source code to avoid warnings.
The general guidelines on the style of Doxygen comments were given above. This section introduces GROMACS specific constructs currently used in Doxygen documentation, as well as how GROMACS uses Doxygen groups to organize the documentation.
Some consistency checks are done automatically using custom scripts. See Source tree checker scripts for details.
To control in which level of documentation a certain function appears, three different mechanisms are used:
Examples of locations where it is necessary to use these explicit commands are given below in the sections on individual code constructs.
As described in Source code organization, each subdirectory under src/gromacs/ represents a module, i.e., a somewhat coherent collection of routines. Doxygen cannot automatically generate a list of routines in a module; it only extracts various alphabetical indexes that contain more or less all documented functions and classes. To help reading the documentation, the routines for a module should be visible in one place.
GROMACS uses Doxygen groups to achieve this: for each documented module, there is a \defgroup definition for the module, and all the relevant classes and functions need to be manually added to this group using \ingroup and \addtogroup. The group page also provides a natural place for overview documentation about the module, and can be navigated to directly from the “Modules” tab in the generated documentation.
Some notes about using \addtogroup are in order:
In addition to the module groups, two fixed groups are provided: group_publicapi and group_libraryapi. Classes and files can be added to these groups using GROMACS specific custom \inpublicapi and \inlibraryapi commands. The generated group documentation pages are not very useful, but annotated classes and files show the API definition under the name, making this information more easily accessible. These commands in file-level comments are also used for some automatic intermodule dependency validation (see below).
Note that functions, enumerations, and other entities that do not have a separate page in the generated documentation can only belong to one group; in such a case, the module group is preferred over the API group.
This section describes the techical details and some tips and tricks for documenting specific code constructs such that useful documentation is produced. If you are wondering where to document a certain piece of information, see the documentation structure section in Documentation organization. The focus of the documentation should be on the overview content: Doxygen pages and the module documentation. An experienced developer can relatively easily read and understand individual functions, but the documentation should help in getting the big picture.
The pages that are accessible through navigation from the front page are written using Markdown and are located under docs/doxygen/. Each page should be placed in the page hierarchy by making it a subpage of another page, i.e., it should be referenced once using \subpage. mainpage.md is the root of the hierarchy.
There are two subdirectories, user/ and lib/, determining the highest documentation level where the page appears. If you add pages to lib/, ensure that there are no references to the page from public API documentation. \if libapi can be used to add references in content that is otherwise public. Generally, the pages should be on a high enough level and provide overview content that is useful enough such that it is not necessary to exclude them from the library API documentation.
For each module, decide on a header file that is the most important one for that module (if there is no self-evident header, it may be better to designate, e.g., module-doc.h for this purpose, but this is currently not done for any module). This header should contain the \defgroup definition for the module. The name of the group should be module_name, where name is the name of the subdirectory that hosts the module.
The module should be added to an appropriate group (see docs/doxygen/misc.cpp for definitions) using \ingroup to organize the “Modules” tab in the generated documentation.
One or more contact persons who know about the contents of the module should be listed using \author commands. This provides a point of contact if one has questions.
Classes and structs in header files appear always in Doxygen documentation, even if their enclosing file is not documented. So start the documentation blocks of classes that are not part of the public API with \internal or \libinternal. Classes declared locally in source files or in unnamed namespaces only appear in the full documentation.
If a whole class is not documented, this does not currently generate any warning. The class is simply excluded from the documentation. But if a member of a documented class is not documented, a warning is generated. Guidelines for documenting free functions apply to methods of a class as well.
For base classes, the API classification (\inpublicapi or \inlibraryapi) should be based on where the class is meant to be subclassed. The visibility (\internal or \libinternal), in contrast, should reflect the API classification of derived classes such that the base class documentation is always generated together with the derived classes.
For classes that are meant to be subclassed and have protected members, the protected members should only appear at the documentation level where the class is meant to be subclassed. For example, if a class is meant to be subclassed only within a module, the protected members should only appear in the full documentation. This can be accomplished using \cond (note that you will need to add the \cond command also to the source files to hide the same methods from Doxygen, otherwise you will get confusing warnings).
These items do not appear in the documentation unless their enclosing scope is documented. For class members, the scope is the class; otherwise, it is the namespace if one exists, or the file. An \addtogroup can also define a scope if the group has higher visibility than the scope outside it. So if a function is not within a namespace (mostly applicable to C code) and has the same visibility as its enclosing file, it is not necessary to add a \internal or \libinternal.
Static functions are currently extracted for all documentation flavors to allow headers to declare static inline functions (used in, for example, math code). Functions in anonymous namespaces are only extracted into the full documentation. Together with the above rules, this means that you should avoid putting a static function within a documented namespace, even within source files, or it may inadvertently appear in the public API documentation.
If you want to exclude an item from the documentation, you need to put it inside a \cond block such that Doxygen does not see it. Otherwise, a warning for an undocumented function is generated. You need to enclose both the declaration and the definition with \cond, as sketched below.
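As a rough sketch (the helper name is made up), the same \cond block appears around both the declaration in the header and the definition in the source file:

// In the header:
//! \cond
void gmx_internal_helper();
//! \endcond

// In the source file:
//! \cond
void gmx_internal_helper()
{
    // implementation that should not appear in the documentation
}
//! \endcond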
Each documented file should start with a documentation block (right after the copyright notice) that documents the file. See the examples section for exact formatting. Things to note:
Directory documentation does not typically contain useful information beyond a possible brief description, since they correspond very closely to modules, and the modules themselves are documented. A brief description is still useful to provide a high-level overview of the source tree on the generated “Files” page. A reference to the module is typically sufficient as a brief description for a directory. All directories are currently documented in docs/doxygen/directories.cpp.
Here is an example of documenting a C++ class and its containing header file. Comments in the code and the actual documentation explain the used Doxygen constructs.
/*! \libinternal \file
 * \brief
 * Declares gmx::MyClass.
 *
 * More details. The documentation is still extracted for the class even if
 * this whole comment block is missing.
 *
 * \author Example Author <example@author.com>
 * \inlibraryapi
 * \ingroup module_mymodule
 */
namespace gmx
{

/*! \libinternal
 * \brief
 * Brief description for the class.
 *
 * More details. The \libinternal tag is required for classes, since they are
 * extracted into the documentation even in the absence of documentation for
 * the enclosing scope.
 * The \libinternal tag is on a separate line because of a bug in Doxygen
 * 1.8.5 (only affects \internal, but for clarity it is also worked around
 * here).
 *
 * \inlibraryapi
 * \ingroup module_mymodule
 */
class MyClass
{
    public:
        // Trivial constructors or destructors do not require documentation.
        // But if a constructor takes parameters, it should be documented like
        // methods below.
        MyClass();
        ~MyClass();

        /*! \brief
         * Brief description for the method.
         *
         * \param[in] param1 Description of the first parameter.
         * \param[in] param2 Description of the second parameter.
         * \returns Description of the return value.
         * \throws std::bad_alloc if out of memory.
         *
         * More details describing the method. It is not an error to put this
         * above the parameter block, but most existing code has it here.
         */
        int myMethod(int param1, const char *param2) const;

        //! Brief description for the accessor.
        int simpleAccessor() const { return var_; }
        /*! \brief
         * Alternative, more verbose way of specifying a brief description.
         */
        int anotherAccessor() const;
        /*! \brief
         * Brief description for another accessor that is so long that it does
         * not conveniently fit on a single line cannot be specified with //!.
         */
        int secondAccessor() const;

    private:
        // Private members (whether methods or variables) are currently ignored
        // by Doxygen, so they don't need to be documented. Documentation
        // doesn't hurt, though.
        int var_;
};

} // namespace gmx
Here is another example of documenting a C header file (so avoiding all C++-style comments), and including free functions. It also demonstrates the use of \addtogroup to add multiple functions into a module group without repeated \ingroup tags.
/*! \file
 * \brief
 * Declares a collection of functions for performing a certain task.
 *
 * More details can go here.
 *
 * \author Example Author <example@author.com>
 * \inpublicapi
 * \ingroup module_mymodule
 */

/*! \addtogroup module_mymodule */
/*! \{ */

/*! \brief
 * Brief description for the data structure.
 *
 * More details.
 *
 * \inpublicapi
 */
typedef struct {
    /** Brief description for member. */
    int member;
    int second; /**< Brief description for the second member. */
    /*! \brief
     * Brief description for the third member.
     *
     * Details.
     */
    int third;
} gmx_mystruct_t;

/*! \brief
 * Performs a simple operation.
 *
 * \param[in] value Input value.
 * \returns Computed value.
 *
 * Detailed description.
 * \inpublicapi cannot be used here, because Doxygen only allows a single
 * group for functions, and module_mymodule is the preferred group.
 */
int gmx_function(int value);

/* Any . in the brief description should be escaped as \. */
/** Brief description for this function. */
int gmx_simple_function();

/*! \} */
The rules where Doxygen expects something to be documented, and when are commands like \internal needed, can be complex. The examples below describe some of the pitfalls.
/*! \libinternal \file
 * \brief
 * ...
 *
 * The examples below assume that the file is documented like this:
 * with an \libinternal definition at the beginning, with an intent to not
 * expose anything from the file in the public API. Things work similarly for
 * the full documentation if you replace \libinternal with \internal
 * everywhere in the example.
 *
 * \ingroup module_example
 */

/*! \brief
 * Brief description for a free function.
 *
 * A free function is not extracted into the documentation unless the enclosing
 * scope (in this case, the file) is. So a \libinternal is not necessary.
 */
void gmx_function();

// Assume that the module_example group is defined in the public API.

//! \addtogroup module_example
//! \{

//! \cond libapi
/*! \brief
 * Brief description for a free function within \addtogroup.
 *
 * In this case, the enclosing scope is actually the module_example group,
 * which is documented, so the function needs to be explicitly excluded.
 * \\libinternal does not work, since it would produce warnings about an
 * undocumented function, so the whole declaration is hidden from Doxygen.
 */
void gmx_function();
//! \endcond

//! \}

// For modules that are only declared in the library API, \addtogroup
// cannot be used without an enclosing \cond. Otherwise, it will create
// a dummy module with the identifier as the name...

//! \cond libapi
//! \addtogroup module_libmodule
//! \{

/*! \brief
 * Brief description.
 *
 * No \libinternal is necessary here because of the enclosing \cond.
 */
void gmx_function();

//! \}
//! \endcond

// An alternative to the above is to use this, if the enclosing scope is only
// documented in the library API:

//! \libinternal \addtogroup module_libmodule
//! \{

//! Brief description.
void gmx_function()

//! \}

/*! \libinternal \brief
 * Brief description for a struct.
 *
 * Documented structs and classes from headers are always extracted into the
 * documentation, so \libinternal is necessary to exclude it.
 * Currently, undocumented structs/classes do not produce warnings, so \cond
 * is not necessary.
 */
struct t_example
{
    int  member1; //!< Each non-private member should be documented.
    bool member2; //!< Otherwise, Doxygen will produce warnings.
};

// This namespace is documented in the public API.
namespace gmx
{

//! \cond libapi
/*! \brief
 * Brief description for a free function within a documented namespace.
 *
 * In this case, the enclosing scope is the documented namespace,
 * so a \cond is necessary to avoid warnings.
 */
void gmx_function();
//! \endcond

/*! \brief
 * Class meant for subclassing only within the module, but the subclasses will
 * be public.
 *
 * This base class still provides public methods that are visible through the
 * subclasses, so it should appear in the public documentation.
 * But it is not marked with \inpublicapi.
 */
class BaseClass
{
    public:
        /*! \brief
         * A public method.
         *
         * This method also appears in the documentation of each subclass in
         * the public and library API docs.
         */
        void method();

    protected:
        // The \cond is necessary to exclude this documentation from the public
        // API, since the public API does not support subclassing.
        //! \cond internal
        //! A method that only subclasses inside the module see.
        void methodForSubclassToCall();
        //! A method that needs to be implemented by subclasses.
        virtual void virtualMethodToImplement() = 0;
        //! \endcond
};

} // namespace gmx
Documenting a new module should place a comment like this in a central header for the module, such that the “Modules” tab in the generated documentation can be used to navigate to the module.
/*! \defgroup module_example "Example module (example)"
 * \ingroup group_utilitymodules
 * \brief
 * Brief description for the module.
 *
 * Detailed description of the module. Can link to a separate Doxygen page for
 * overview, and/or describe the most important headers and/or classes in the
 * module as part of this documentation.
 *
 * For modules not exposed publicly, \libinternal can be added at the
 * beginning (before \defgroup).
 *
 * \author Author Name <author.name@email.com>
 */

// In other code, use \addtogroup module_example and \ingroup module_example to
// add content (classes, functions, etc.) onto the module page.
The most common mistake, in particular in C code, is to forget to document the file. This causes Doxygen to ignore most comments in the file, so it does not validate the contents of the comments either, nor is it possible to actually check how the generated documentation looks like.
The following examples show some other common mistakes (and some less common) that do not produce correct documentation, as well as Doxygen “features”/bugs that can be confusing.
The struct itself is not documented; other comments within the declaration are ignored.
struct t_struct {
    // The comment tries to document both members at once, but it only
    // applies to the first. The second produces warnings about missing
    // documentation (if the enclosing struct was documented).
    //! Angle parameters.
    double alpha, beta;
};
This does not produce any brief documentation. An explicit \brief is required, or //! (C++) or /** */ (C) should be used.
/*! Brief comment. */
int gmx_function();
This does not produce any documentation at all, since a ! is missing at the beginning.
/* \brief
 * Brief description.
 *
 * More details.
 */
int gmx_function();
This puts the whole paragraph into the brief description. A short description is preferable, separated by an empty line from the rest of the text.
/*! \brief
 * Brief description. The description continues with all kinds of details about
 * what the function does and how it should be called.
 */
int gmx_function();
This may be a Doxygen bug, but this does not produce any brief description.
/** \internal Brief description. */
int gmx_function();
If the first declaration below appears in a header, and the second in a source file, then Doxygen does not associate them correctly and complains about missing documentation for the latter. The solution is to explicitly add a namespace prefix also in the source file, even though the compiler does not require it.
// Header file
//! Example function with a namespace-qualified parameter type.
int gmx_function(const gmx::SomeClass &param);

// Source file
using gmx::SomeClass;

int gmx_function(const SomeClass &param);
This puts the namespace into the mentioned module, instead of the contents of the namespace. \addtogroup should go within the innermost scope.
//! \addtogroup module_example
//! \{
namespace gmx
{
//! Function intended to be part of module_example.
int gmx_function();
}
More examples you can find by looking at existing code in the source tree. In particular new C++ code such as that in the src/gromacs/analysisdata/ and src/gromacs/options/ subdirectories contains a large amount of code documented mostly along these guidelines. Some comments in src/gromacs/selection/ (in particular, any C-like code) predate the introduction of these guidelines, so those are not the best examples.
|
https://manual.gromacs.org/documentation/5.1.2/dev-manual/doxygen.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
getdns reference¶
getdns contexts¶
This section describes the getdns Context object, as well as its as its methods and attributes.
- class getdns.Context([set_from_os])¶
Creates a context, an opaque object which describes the environment within which a DNS query executes. This includes namespaces, root servers, resolution types, and so on. These are accessed programmatically through the attributes described below.
Context() takes one optional constructor argument.
set_from_os is an integer and may take the value either 0 or 1. If 1, which most developers will want, getdns will populate the context with default values for the platform on which it's running.
The Context class has the following public read/write attributes:
append_name¶
Specifies whether to append a suffix to the query string before the API starts resolving a name. Its value must be one of
getdns.APPEND_NAME_ALWAYS,
getdns.APPEND_NAME_ONLY_TO_SINGLE_LABEL_AFTER_FAILURE,
getdns.APPEND_NAME_ONLY_TO_MULTIPLE_LABEL_NAME_AFTER_FAILURE, or
getdns.APPEND_NAME_NEVER. This controls whether or not to append the suffix given by
suffix.
dns_root_servers¶
The value of dns_root_servers is a list of dictionaries containing addresses to be used for looking up top-level domains. Each dict in the list contains two key-value pairs:
- address_data: a string representation of an IPv4 or IPv6 address
- address_type: either the string “IPv4” or “IPv6”
For example, the addresses list could look like
>>> addrs = [ { 'address_data': '2001:7b8:206:1::4:53', 'address_type': 'IPv6' }, ... { 'address_data': '65.22.9.1', 'address_type': 'IPv4' } ] >>> mycontext.dns_root_servers = addrs
dns_transport_list¶
An ordered list of transport options to be used for DNS lookups, ordered by preference (first choice as list element 0, second as list element 1, and so on). The possible values are
getdns.TRANSPORT_UDP,
getdns.TRANSPORT_TCP, and
getdns.TRANSPORT_TLS.
dnssec_allowed_skew¶
Its value is the number of seconds of skew that is allowed in either direction when checking an RRSIG’s Expiration and Inception fields. The default is 0.
dnssec_trust_anchors¶
Its value is a list of DNSSEC trust anchors, expressed as RDATAs from DNSKEY resource records.
edns_client_subnet_private¶
May be set to 0 or 1. When 1, requests upstreams not to reveal query’s originating network.
edns_maximum_udp_payload_size¶
Its value must be an integer between 512 and 65535, inclusive. The default is 512.
follow_redirects¶
Specifies whether or not DNS queries follow redirects. The value must be one of
getdns.REDIRECTS_FOLLOW for normal following of redirects through CNAME and DNAME; or
getdns.REDIRECTS_DO_NOT_FOLLOW to cause any lookups that would have gone through CNAME and DNAME to return the CNAME or DNAME, not the eventual target.
implementation_string¶
A string describing the implementation of the underlying getdns library, retrieved from libgetdns. Currently ““
limit_outstanding_queries¶
Specifies a limit (an integer value) on the number of outstanding DNS queries. The API will block itself from sending more queries if it is about to exceed this value, and instead keep those queries in an internal queue. A value of 0 indicates that the number of outstanding DNS queries is unlimited.
namespaces¶
The namespaces attribute takes an ordered list of namespaces that will be queried. (Important: this context setting is ignored for the getdns.general() function; it is used for the other functions.) The allowed values are
getdns.NAMESPACE_DNS,
getdns.NAMESPACE_LOCALNAMES,
getdns.NAMESPACE_NETBIOS,
getdns.NAMESPACE_MDNS, and
getdns.NAMESPACE_NIS. When a normal lookup is done, the API does the lookups in the order given and stops when it gets the first result; a different method with the same result would be to run the queries in parallel and return when it gets the first result. Because lookups might be done over different mechanisms because of the different namespaces, there can be information leakage that is similar to that seen with POSIX getaddrinfo(). The default is determined by the OS.
resolution_type¶
Specifies whether DNS queries are performed with nonrecursive lookups or as a stub resolver. The value is either
getdns.RESOLUTION_RECURSING or
getdns.RESOLUTION_STUB.
If an implementation of this API is only able to act as a recursive resolver, setting resolution_type to
getdns.RESOLUTION_STUB will throw an exception.
suffix¶
Its value is a list of strings to be appended based on
append_name. The list elements must follow the rules in RFC 4343#section-2.1
tls_authentication¶
The mechanism to be used for authenticating the TLS server when using a TLS transport. May be
getdns.AUTHENTICATION_REQUIRED or
getdns.AUTHENTICATION_NONE. (getdns.AUTHENTICATION_HOSTNAME remains as an alias for getdns.AUTHENTICATION_REQUIRED but is deprecated and will be removed in a future release)
tls_query_padding_blocksize¶
Optional padding blocksize for queries when using TLS. Used to increase the difficulty for observers to guess traffic content.
upstream_recursive_servers¶
A list of dicts defining where a stub resolver will send queries. Each dict in the list contains at least two names: address_type (either “IPv4” or “IPv6”) and address_data (whose value is a string representation of an IP address). It might also contain “port” to specify which port to use to contact these DNS servers; the default is 53. If the stub and a recursive resolver both support TSIG (RFC 2845), the upstream_list entry can also contain tsig_algorithm (a string) that is the name of the TSIG hash algorithm, and tsig_secret (a base64 string) that is the TSIG key.
There is also now support for pinning an upstream’s certificate’s public keys with pinsets (when using TLS for transport). Add an element to the upstream_recursive_server list entry, called ‘tls_pubkey_pinset’, which is a list of public key pins. (See the example code in our examples directory).
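For example (the resolver address below is only a placeholder), a stub context could be pointed at a single upstream like this:

>>> ctx = getdns.Context()
>>> ctx.resolution_type = getdns.RESOLUTION_STUB
>>> ctx.upstream_recursive_servers = [
...     {'address_type': 'IPv4', 'address_data': '192.0.2.53', 'port': 53}
... ]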
The Context class includes public methods to execute a DNS query, as well as a method to return the entire set of context attributes as a Python dictionary. Context methods are described below:
general(name, request_type[, extensions][, userarg][, transaction_id][, callback])¶
Context.general() is used for looking up any type of DNS record. The keyword arguments are:
name: a string containing the query term.
request_type: a DNS RR type as a getdns constant (listed here)
extensions: optional. A dictionary containing attribute/value pairs, as described below
userarg: optional. A string containing arbitrary user data; this is opaque to getdns
transaction_id: optional. An integer.
callback: optional. This is a function name. If it is present the query will be performed asynchronously (described below).
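A minimal synchronous lookup with general() might look like the sketch below (getdns.RRTYPE_TXT is assumed to be the constant for TXT records, and the hostname is a placeholder):

>>> c = getdns.Context()
>>> results = c.general('example.com', request_type=getdns.RRTYPE_TXT)
>>> if results.status == getdns.RESPSTATUS_GOOD:
...     print(len(results.replies_tree))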
address(name[, extensions][, userarg][, transaction_id][, callback])¶
There are two critical differences between Context.address() and Context.general() beyond the missing request_type argument:
- In Context.address(), the name argument can only take a host name.
- Context.address() always uses all of the namespaces from the context (to better emulate getaddrinfo()), while Context.general() only uses the DNS namespace.
hostname(name[, extensions][, userarg][, transaction_id][, callback])¶
The address is given as a dictionary. The dictionary must have two names:
address_type: must be a string matching either “IPv4” or “IPv6”
address_data: a string representation of an IPv4 or IPv6 IP address
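A rough sketch of a reverse lookup using the dictionary form described above (the address is only a placeholder):

>>> c = getdns.Context()
>>> results = c.hostname({'address_type': 'IPv4', 'address_data': '192.0.2.1'})
>>> print(results.status)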
service(name[, extensions][, userarg][, transaction_id][, callback])¶
name must be a domain name for an SRV lookup. The call returns the relevant SRV information for the name.
get_api_information()¶
Retrieves context information. The information is returned as a Python dictionary with the following keys:
version_string
implementation_string
resolution_type
all_context
all_context is a dictionary containing the following keys:
append_name
dns_transport
dnssec_allowed_skew
edns_do_bit
edns_extended_rcode
edns_version
follow_redirects
limit_outstanding_queries
namespaces
suffix
timeout
tls_authentication
upstream_recursive_servers
The
getdns module has the following read-only attribute:
Extensions¶
Extensions are Python dictionaries, with the keys being the names of the
extensions. The definition of each extension describes the values that
may be assigned to that extension. For most extensions it is a Boolean,
and since the default value is “False” it will most often take the value
getdns.EXTENSION_TRUE.
The extensions currently supported by
getdns are:
-
dnssec_return_status
-
dnssec_return_only_secure
-
dnssec_return_validation_chain
-
return_both_v4_and_v6
-
add_opt_parameters
-
add_warning_for_bad_dns
-
specify_class
-
return_call_reporting
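Extensions are passed to the query methods as an ordinary dictionary. A short sketch combining two of the extensions listed above (the hostname is a placeholder):

>>> c = getdns.Context()
>>> ext = { 'return_both_v4_and_v6': getdns.EXTENSION_TRUE,
...         'dnssec_return_status': getdns.EXTENSION_TRUE }
>>> results = c.address('example.com', extensions=ext)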
Extensions for DNSSEC¶
If an application wants the API to do DNSSEC validation for a request, it must set one or more DNSSEC-related extensions. Note that the default is for none of these extensions to be set and the API will not perform DNSSEC validation. Note that getting DNSSEC results can take longer in a few circumstances.
To return the DNSSEC status for each DNS record in the
replies_tree list, use the
dnssec_return_status
extension. Set the extension’s value to
getdns.EXTENSION_TRUE to cause the returned status to have
the name
dnssec_status added to the other names in
the record’s dictionary (“header”, “question”, and so on). The
potential values for that name are
getdns.DNSSEC_SECURE,
getdns.DNSSEC_BOGUS,
getdns.DNSSEC_INDETERMINATE, and
getdns.DNSSEC_INSECURE.
If instead of returning the status, you want to see only secure results, use the
dnssec_return_only_secure
extension; set its value to
getdns.EXTENSION_TRUE and only responses that could be validated as secure are returned.
To have a set of additional DNSSEC-related records needed for validation returned in the
response object, set the
dnssec_return_validation_chain
extension to
getdns.EXTENSION_TRUE.
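As a rough sketch (the hostname is a placeholder, and a context c is assumed to exist), the per-record status added by dnssec_return_status can then be inspected like this:

>>> ext = {'dnssec_return_status': getdns.EXTENSION_TRUE}
>>> results = c.address('example.com', extensions=ext)
>>> for reply in results.replies_tree:
...     print(reply.get('dnssec_status'))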
Returning both IPv4 and IPv6 responses¶
Many applications want to get both IPv4 and IPv6 addresses
in a single call so that the results can be processed
together. The
address()
method is able to do this automatically. If you are
using the
general() method,
you can enable this with the
return_both_v4_and_v6
extension. The extension’s value must be set to
getdns.EXTENSION_TRUE to cause the results to be the lookup
of either A or AAAA records to include any A and AAAA
records for the queried name (otherwise, the extension does
nothing). These results are expected to be usable with Happy
Eyeballs systems that will find the best socket for an
application.
Setting up OPT resource records¶
For lookups that need an OPT resource record in the
Additional Data section, use the
add_opt_parameters
extension. The extension’s value (a dict) contains the
parameters; these are described in more detail in
RFC 2671. They are:
-
maximum_udp_payload_size: an integer between 512 and 65535 inclusive. If not specified it defaults to the value in the getdns context.
-
extended_rcode: an integer between 0 and 255 inclusive. If not specified it defaults to the value in the getdns context.
-
version: an integer between 0 and 255 inclusive. If not specified it defaults to 0.
-
do_bit: must be either 0 or 1. If not specified it defaults to the value in the getdns context.
-
options: a list containing dictionaries for each option to be specified. Each dictionary contains two keys:
option_code (an integer) and
option_data (in the form appropriate for that option code).
The client_subnet.py program in our example directory
shows how to pack and send an OPT record.
Getting Warnings for Responses that Violate the DNS Standard¶
To receive a warning if a particular response violates some
parts of the DNS standard, use the
add_warning_for_bad_dns
extension. The extension’s value is set to
getdns.EXTENSION_TRUE to cause each reply in the
replies_tree to contain an additional name,
bad_dns (a
list). The list is zero or more values that indicate types of
bad DNS found in that reply. The list of values is:
A DNS query type that does not allow a target to be a CNAME pointed to a CNAME
One or more labels in a returned domain name is all-numeric; this is not legal for a hostname
A DNS query for a type other than CNAME returned a CNAME response
Using other class types¶
The vast majority of DNS requests are made with the Internet
(IN) class. To make a request in a different DNS class, use,
the
specify_class extension. The extension’s value (an int)
contains the class number. Few applications will ever use
this extension.
Extensions relating to the API¶
An application might want to see debugging information for
queries, such as the length of time it takes for each query
to return to the API. Use the
return_call_reporting
extension. The extension’s value is set to
getdns.EXTENSION_TRUE to add the name
call_reporting (a
list) to the top level of the
response object. Each member
of the list is a dict that represents one call made for the
call to the API. Each member has the following names:
-
query_name is the name that was sent
-
query_type is the type that was queried for
-
query_to is the address to which the query was sent
-
start_time is the time the query started in milliseconds since the epoch, represented as an integer
-
end_time is the time the query was received in milliseconds since the epoch, represented as an integer
-
entire_reply is the entire response received
-
dnssec_result is the DNSSEC status, or getdns.DNSSEC_NOT_PERFORMED if DNSSEC validation was not performed
Asynchronous queries¶
The getdns Python bindings support asynchronous queries, in which a query returns immediately and a callback function is invoked when the response data are returned. The query method interfaces are fundamentally the same, with a few differences:
- The query returns a transaction id. That transaction id may be used to cancel future callbacks
- The query invocation includes the name of a callback function. For example, if you'd like to call the function "my_callback" when the query returns, an address lookup could look like
>>> c = getdns.Context()
>>> tid = c.address('', callback=my_callback)
- We’ve introduced a new
Contextmethod, called
run. When your program is ready to check to see whether or not the query has returned, invoke the run() method on your context. Note that we use the libevent asynchronous event library and an event_base is associated with a context. So, if you have multiple outstanding events associated with a particular context,
runwill invoke all of those that are waiting and ready.
- In previous releases the callback argument took the form of a literal string, but as of this release you may pass in the name of any Python runnable, without quotes. The newer form is preferred.
The callback script takes four arguments: type, result, userarg, and transaction_id. The type argument contains the callback type, which may have one of the following values:
-
getdns.CALLBACK_COMPLETE: The query was successful and the results are contained in the result argument
-
getdns.CALLBACK_CANCEL: The callback was cancelled before the results were processed
-
getdns.CALLBACK_TIMEOUT: The query timed out before the results were processed
-
getdns.CALLBACK_ERROR: An unspecified error occurred
The
result argument contains a result object, with the
query response
The
userarg argument contains the optional user argument
that was passed to the query at the time it was invoked.
The
transaction_id argument contains the transaction_id
associated with a particular query; this is the same
transaction id that was returned when the query was invoked.
This is an example callback function:
def cbk(type, result, userarg, tid):
    if type == getdns.CALLBACK_COMPLETE:
        status = result.status
        if status == getdns.RESPSTATUS_GOOD:
            for addr in result.just_address_answers:
                addr_type = addr['address_type']
                addr_data = addr['address_data']
                print '{0}: {1} {2}'.format(userarg, addr_type, addr_data)
        elif status == getdns.RESPSTATUS_NO_SECURE_ANSWERS:
            print "{0}: No DNSSEC secured responses found".format(hostname)
        else:
            print "{0}: getdns.address() returned error: {1}".format(hostname, status)
    elif type == getdns.CALLBACK_CANCEL:
        print 'Callback cancelled'
    elif type == getdns.CALLBACK_TIMEOUT:
        print 'Query timed out'
    else:
        print 'Unknown error'
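A sketch of how the pieces fit together (the hostname is a placeholder): issue the query asynchronously with the callback above, then dispatch the pending callbacks with run():

>>> c = getdns.Context()
>>> tid = c.address('example.com', callback=cbk, userarg='example.com')
>>> c.run()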
|
https://getdns.readthedocs.io/en/latest/functions.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
ec_httpsrv_response_status_denied
Name
ec_httpsrv_response_status_denied — Sets the HTTP status to indicate a 403 error, and sets content-type
Synopsis
#include "modules/listeners/httpsrv.h"
int ec_httpsrv_response_status_denied(sess, type);

Sets the HTTP status to indicate a 403 error, and sets content-type.
- sess
the session to modify
- type
the content type to set
0 if successful, else returns an errno indicating what went wrong.
This function is equivalent to setting the status to 403 DENIED and setting the Content-Type: header to the value of type.
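As a rough usage sketch (the session variable comes from the listener's request callback; its exact type is not shown here, and the content type is just an example):

#include "modules/listeners/httpsrv.h"

/* inside an httpsrv request handler, with a valid session "sess" */
int ret = ec_httpsrv_response_status_denied(sess, "text/plain");
if (ret != 0) {
  /* ret is an errno value explaining why the status could not be set */
}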
|
https://support.sparkpost.com/momentum/3/3-api/apis-ec-httpsrv-response-status-denied
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
How To Migrate From jQuery To Next.js
This article has been kindly supported by our dear friends at Netlify who are a diverse group of incredible talent from all over the world and offers a platform for web developers that multiplies productivity. Thank you!
When jQuery appeared in 2006, a lot of developers and organizations started to adopt it for their projects. The possibility of extending and manipulating the DOM that the library offers is great, and we also have many plugins to add behavior to our pages in case we need to do tasks that aren’t supported by the jQuery main library. It simplified a lot of the work for developers, and, at that moment, it made JavaScript a powerful language to create web applications or Single Page Applications.
The result of jQuery popularity is measurable still today: Almost 80% of the most popular websites of the world still use it. Some of the reasons why jQuery is so popular are:
- It supports DOM manipulation.
- It provides CSS manipulation.
- Works the same on all web browsers.
- It wraps HTML event methods.
- Easy to create AJAX calls.
- Easy to use effects and animations.
Over the years, JavaScript changed a lot and added several features that we didn’t have in the past. With the re-definition and evolution of ECMAScript, some of the functionalities that jQuery provided were added to the standard JavaScript features and supported by all the web browsers. With this happening, some of the behavior jQuery offers was not needed anymore, as we are able to do the same things with plain JavaScript.
On the other hand, a new way of thinking about and designing user interfaces started to emerge. Frameworks like React, Angular or Vue allow developers to create web applications based on reusable functional components. React, for example, works with the "virtual DOM", which is a representation of the DOM in memory, whereas jQuery interacts directly with the DOM, in a less performant way. React also offers features that facilitate common development tasks, such as state management. With this new approach and the popularity that Single Page Applications started to gain, a lot of developers started to use React for their web application projects.
And front end development evolved even more, with frameworks created on top of other frameworks. That is the case, for example, of Next.js. As you probably know, it’s an open-source React framework that offers features to generate static pages, create server-side rendered pages, and combine both types in the same application. It also allows creating serverless APIs inside the same app.
There is a curious scenario: Even though these frontend frameworks are more and more popular over the years, jQuery is still adopted by a vast majority of web pages. One of the reasons why this happens is that the percentage of websites using WordPress is really high, and jQuery is included in the CMS. Another reason is that some libraries, like Bootstrap, have a dependency on jQuery, and there are some ready-to-use templates that use it and its plugins.
But another reason for this amount of websites using jQuery is the cost of migrating a complete web application to a new framework. It’s not easy, it’s not cheap and it’s time-consuming. But, in the end, working with new tools and technologies brings a lot of benefits: wider support, community assistance, better developer experience, and ease of getting people working on the project.
There are many scenarios where we don’t need (or don’t want) to follow the architecture that frameworks like React or Next.js impose on us, and that is OK. However, jQuery is a library that contains a lot of code and features that are not needed anymore. Many of the features jQuery offers can be accomplished using modern JavaScript native functions, and probably in a more performant way.
Let’s discuss how we could stop using jQuery and migrate our website into a React or Next.js web application.
Define The Migration Strategy
Do We Need A Library?
Depending on the features of our web application, we could even have the case where a framework is not really needed. As mentioned before, several jQuery features were included (or at least a very similar one) to the latest web standard versions. So, considering that:
- The $(selector) pattern from jQuery can be replaced with querySelectorAll().
Instead of doing:
$("#someId");
We can do:
document.querySelectorAll("#someId");
- We now have the Element.classList property if we want to manipulate CSS classes.
Instead of doing:
$(selector).addClass(className);
We can do:
element.classList.add(className);
- Many animations can be done directly using CSS, instead of implementing JavaScript.
Instead of doing:
$(selector).fadeIn();
We can do:
element.classList.add('show'); element.classList.remove('hide');
And apply some CSS styling:
.show {
  transition: opacity 400ms;
}
.hide {
  opacity: 0;
}
- We now have the addEventListener function if we want to handle events.
Instead of doing:
$(selector).on(eventName, eventHandler);
We can do:
element.addEventListener(eventName, eventHandler);
- Instead of using jQuery Ajax, we can use
XMLHttpRequest.
Instead of doing:
$.ajax({ type: 'POST', url: '/the-url', data: data });
We can do:
var request = new XMLHttpRequest();
request.open('POST', '/the-url', true);
request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8');
request.send(data);
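Where the target browsers support it, the fetch API offers a shorter, promise-based alternative to XMLHttpRequest (the URL and the data variable here are the same placeholders as above):

fetch('/the-url', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8' },
  body: data
}).then(function (response) {
  // handle the response here
});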
For more details, you can take a look at these Vanilla JavaScript Code Snippets.
Identify Components
If we are using jQuery in our application, we should have some HTML content that is generated on the web server, and JavaScript code that adds interactivity to the page. We are probably adding event handlers on page load that will manipulate the DOM when the events happen, probably updating the CSS or the style of the elements. We could also be calling backend services to execute actions, that can affect the DOM of the page, or even reload it.
The idea would be to refactor the JavaScript code living in the pages and build React components. This will help us to join related code and compose elements that will be part of a larger composition. By doing this we will also be able to have better handling of the state of our application. Analyzing the frontend of our application, we should divide it into parts dedicated to a certain task, so we can create components based on that.
If we have a button:
<button id="btn-action">Click</button>
With the following logic:
var $btnAction = $("#btn-action");
$btnAction.on("click", function() {
  alert("Button was clicked");
});
We can migrate it to a React Component:
import React from 'react';

function ButtonComponent() {
  let handleButtonClick = () => {
    alert('Button clicked!')
  }

  return <button onClick={handleButtonClick}>Click</button>
}
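During a gradual migration, a component like this can be mounted onto a placeholder element in the existing server-rendered markup with react-dom (the element id is just an example, and the component is assumed to be exported from its own module):

import React from 'react';
import ReactDOM from 'react-dom';
import ButtonComponent from './ButtonComponent';

// Render the React component into an element that already exists in the page.
ReactDOM.render(<ButtonComponent />, document.getElementById('button-root'));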
But we should also evaluate how the migration process will be accomplished since our application is working and being used, and we don’t want to affect it (or, at least, affect it as little as possible).
Good Migration
A good migration is one where all the parts of the application are fully migrated to the new framework or technology. This would be the ideal scenario for our application since we would keep in sync all the parts, and we would be using a unified tool and a unique referred version.
A good and complete migration usually includes a complete re-write of the code of our app, and that makes sense. If we build an app from scratch, we have the possibility of deciding what direction we want to take with the new code. We could use a fresh point of view over our existing systems and workflow, and create a whole new app with the knowledge that we have at this moment, more complete than the one we had when we first created our web application.
But a complete rewrite has some problems. First of all, it requires a lot of time. The bigger the application is, the more time we will need to rewrite it. Another problem is the amount of work, and amount of developers, that it takes. And, if we don’t do a progressive migration, we have to think about how much time our application will be unavailable.
Normally, a complete rewrite can be accomplished with small projects, projects that don’t change frequently, or applications that are not so critical for our business.
Fast Migration
Another approach is dividing the application into parts or pieces. We migrate the app part by part, and we release those parts when they are ready. So, we have migrated parts of our application available for the users, and coexisting with our existing production app.
With this gradual migration, we deliver separated features of our project in a faster way to the users, since we don’t have to wait for the complete application to be re-written. We also get faster feedback from the users, which allows us to detect bugs or issues earlier.
But a gradual migration drives us to have different tools, libraries, dependencies, and frameworks. Or we could even have to support different versions from the same tool. This extended support could bring conflicts to our application.
We could even have problems if we are applying policies in the global scope, since each one of the migrated parts could work in a different way while still being affected by code that sets global parameters for our system. An example of this is the cascading logic of CSS styling.
Imagine we work with different versions of jQuery across our web application because we added functionalities from newer versions to the modules that have been created later. How complicated would it be to migrate all our app to a newer version of jQuery? Now, imagine the same scenario but migrating to a completely different framework like Next.js. That can be complicated.
Frankenstein Migration
Denys Mishunov wrote an article on Smashing Magazine presenting an alternative to these two migration ideas, trying to get the best of the previous two approaches: The Frankenstein Migration. It bases the migration process in two main components: Microservices and Web Components.
The migration process consists of a list of steps to follow:
1. Identify Microservices
Based on our app code, we should divide it into independent parts that are dedicated to one small job. If we are thinking about using React or Next.js, we could link the concept of microservices to the different components that we have.
Let’s think about a grocery list application as an example. We have a list of things to purchase, and an input to add more things to the list. So, if we want to split our app into small parts, we could think about an “item list” component and an “add item” component. Doing this, we can separate the functionality and markup related to each one of those parts into different React components.
To corroborate that the components are independent, we should be able to remove one of them from the app, and the other ones shouldn’t be affected by that. If we get an error when removing the markup and functionality from a service, we have not identified the components correctly, or we need to refactor the way our code works.
2. Allow Host-to-Alien Access
“Host” is our existing application. “Alien” is the one we will start creating, with the new framework. Both should work independently, but we should provide access from Host to Alien. We should be able to deploy any of the two applications without breaking the other one, but keeping the communication between them.
3. Write An Alien Component
Re-write a service from our Host application into our Alien application, using the new framework. The component should follow the same principle of independence that we mentioned before.
Let’s go back to the grocery list example. We identified an “add item” component. With jQuery, the markup of the component will look something like this:
<input class="new-item" />
And the JavaScript/jQuery code to add the items to the list will be something like this:
var ENTER_KEY = 13;
$('.new-item').on('keyup', function (e) {
  var $input = $(e.target);
  var val = $input.val().trim();
  if (e.which !== ENTER_KEY || !val) {
    return;
  }
  // code to add the item to the list
  $input.val('');
});
Instead of that, we can create an AddItem React component:
import React, { useState } from 'react'

function AddItemInput({ defaultText }) {
  let [text, setText] = useState(defaultText)
  let handleSubmit = e => {
    e.preventDefault()
    if (e.which === 13) {
      setText(e.target.value.trim())
    }
  }
  return (
    <input
      type="text"
      value={text}
      onChange={(e) => setText(e.target.value)}
      onKeyDown={handleSubmit}
    />
  )
}
4. Write Web Component Wrapper Around Alien Service
Create a wrapper component that imports our just created Alien service and renders it. The idea is to create a bridge between the Host app and the Alien app. Keep in mind that we could need a package bundler to generate JavaScript code that works in our current application since we will need to copy our new React components and make them work.
Following the grocery list example, we can create an AddItem-wrapper.js file in the Host project. This file will contain the code that wraps our already created AddItem component, and creates a custom element with it:
import React from "../alien/node_modules/react"; import ReactDOM from "../alien/node_modules/react-dom"; import AddItem from "../alien/src/components/AddItem"; class FrankensteinWrapper extends HTMLElement { connectedCallback() { const appWrapper = document.createElement("div"); appWrapper.classList.add("grocerylistapp"); ... ReactDOM.render( <HeaderApp />, appWrapper ); … } } customElements.define("frankenstein-add-item-wrapper", FrankensteinWrapper);
We should bring the necessary node modules and components from the Alien application folders since we need to import them to make the component work.
5. Replace Host Service With Web Component
This wrapper component will replace the one in the Host application, and we will start using it. So, the application in production will be a mix of Host components and Alien wrapped components.
In our example Host application, we should replace:
<input class="new-item" />
With
<frankenstein-add-item-wrapper></frankenstein-add-item-wrapper> ... <script type="module" src="js/AddItem-wrapper.js"></script>
6. Rinse And Repeat
Go through steps 3, 4, and 5 for each one of the identified microservices.
7. Switch To Alien
Host is now a collection of wrapper components that include all the web components we created on the Alien application. As we converted all the identified microservices, we can say that the Alien application is finished and all the services were migrated. We just need to point our users to the Alien application now.
The Frankenstein Migration method works as a combination of both the Good and the Fast approaches. We migrate the complete application but release the different components when they are done. So, they are available to be used sooner and evaluated by the users in production.
We have to consider, though, that we are doing some over-work with this approach. If we want to use the components we create for our Alien application, we have to create a wrapper component to include in the Host app. This makes us spend time developing the code for these wrapper elements. Also, by using them in our Host application, we are duplicating the inclusion of code and dependencies, and adding code that will affect the performance of our application.
Strangler Application
Another approach we can take is Legacy Application Strangulation. We identify the edges of our existing web application, and whenever we need to add functionality to our app we do it using the newer framework, until the old system is “strangled”. This approach helps us reduce the risk we are exposed to while migrating an app.
To follow this approach, we need to identify different components, as we do in Frankenstein Migration. Once we divide our app into different pieces of related imperative code, we wrap them in new React components. We don’t add any additional behavior, we just create React components that render our existing content.
Let’s see an example for more clarification. Suppose we have this HTML code in our application:
<div class="accordion"> <div class="accordion-panel"> <h3 class="accordion-header">Item 1</h3> <div class="accordion-body">Text 1</div> </div> <div class="accordion-panel"> <h3 class="accordion-header">Item 2</h3> <div class="accordion-body">Text 2</div> </div> <div class="accordion-panel"> <h3 class="accordion-header">Item 3</h3> <div class="accordion-body">Text 3</div> </div>> </div>
And this JavaScript code (we already replaced jQuery functions with new JavaScript standard features):
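The original snippet is not reproduced in full here, so the following is only a minimal sketch of what a plain-JavaScript accordion toggle for the markup above could look like; the click-to-toggle behavior and the "active" class name are assumptions for illustration, not the author's exact code.

document.querySelectorAll(".accordion-header").forEach(function (header) {
  header.addEventListener("click", function () {
    // toggle the panel that contains the clicked header
    header.parentElement.classList.toggle("active");
  });
});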
This is a common implementation of an accordion component for JavaScript. As we want to introduce React here, we need to wrap our existing code with a new React component:
function Accordions() {
  useEffect(() => {
    // the existing accordion code from above runs here, unchanged
  }, [])
  return null
}

ReactDOM.render(<Accordions />, document.createElement("div"))
The component is not adding any new behavior or feature. We use useEffect so that the wrapped code runs once the component has been mounted in the document. The function returns null because the component doesn’t need to render any markup of its own.
So, we didn’t add any new functionality to our existing app, but we introduced React without changing its behavior. From now on, whenever we add new features or changes to our code, we will do it using the newer selected framework.
Client-side Rendering, Server-side Rendering, Or Static Generation?
Next.js gives us the possibility of choosing how we want to render each page of our web application. We can use the client-side rendering that React already offers us to generate the content directly in the user’s browser. Or, we can render the content of our page in the server using server-side rendering. Finally, we can create the content of our page at build time using static generation.
In our application, we should be loading and rendering content on page load, before we start interacting with any JavaScript library or framework. We may be using a server-side rendering programming language or technology, such as ASP.NET, PHP, or Node.js. We can take advantage of Next.js features and replace our current rendering method with the Next.js server-side rendering method. Doing this, we keep all the behavior inside the same project, working under the umbrella of our selected framework. Also, we keep the logic of our main page and the React components within the same code that generates all the needed content for our page.
Let’s think about a dashboard page as an example. We can generate all the initial markup of the page at load time, in the server, instead of having to generate it with React in the user’s web browser.
const DashboardPage = ({ user }) => {
  return (
    <div>
      <h2>{user.name}</h2>
      {/* User data */}
    </div>
  )
}

export const getServerSideProps = async ({ req, res, params }) => {
  return {
    props: {
      user: getUser(),
    },
  }
}

export default DashboardPage
If the markup that we render on page load is predictable and is based on data that we can retrieve at build time, static generation would be a good choice. Generating static assets at build time will make our application faster, more secure, scalable, and easier to maintain. And, in case we need to generate dynamic content on the pages of our app, we can use React’s client-side rendering to retrieve information from services or data sources.
Imagine we have a blog site with many blog posts. If we use Static Generation, we can create a generic [blog-slug].js file in our Next.js application, and by adding the following code we can generate all the static pages for our blog posts at build time.
export const getStaticPaths = async () => {
  const blogPosts = await getBlogPosts()
  const paths = blogPosts.map(({ slug }) => ({
    params: {
      slug,
    },
  }))
  return {
    paths,
    fallback: false,
  }
}

export const getStaticProps = async ({ params }) => {
  const { slug } = params
  const blogPost = await getBlogPostBySlug(slug)
  return {
    props: {
      data: JSON.parse(JSON.stringify(blogPost)),
    },
  }
}
Create An API Using API Routes
One of the great features Next.js offers is the possibility to create API Routes. With them, we can create our own serverless functions using Node.js. We can also install NPM packages to extend the functionality. A cool thing about this is that our API will live in the same project/app as our frontend, so we won’t have any CORS issues.
If we maintain an API that is called from our web application using jQuery AJAX functionality, we could replace them using API Routes. Doing this, we will keep all the codebase of our app in the same repository, and we will make the deployment of our application simpler. If we are using a third-party service, we can use API Routes to “mask” the external URLs.
We could have an API Route /pages/api/get/[id].js that returns data that we use on our page.
export default async (req, res) => {
  const { id } = req.query
  try {
    const data = getData(id)
    res.status(200).json(data)
  } catch (e) {
    res.status(500).json({ error: e.message })
  }
}
And call it from the code of our page.
const res = await fetch(`/api/get/${id}`, {
  method: 'GET',
})

if (res.status === 200) {
  // Do something
} else {
  console.error(await res.text())
}
Deploy to Netlify
Netlify is a complete platform that can be used to automate, manage, build, test, deploy and host web applications. It has a lot of features that make modern web application development easier and faster. Some Netlify highlights are:
- Global CDN hosting platform,
- Serverless functions support,
- Deploy previews based on Github Pull Requests,
- Webhooks,
- Instant rollbacks,
- Role-based access control.
Netlify is a great platform to manage and host our Next.js applications, and it’s pretty simple to deploy a web app with it.
First of all, we need to keep track of our Next.js app code in a Git repository. Netlify connects to GitHub (or the Git platform we prefer), and whenever a change is introduced to a branch (a commit or a Pull Request), an automatic “build and deploy” task will be triggered.
Once we have a Git repository with the code of our app, we need to create a “Netlify Site” for it. To do this, we have two options:
- Using Netlify CLI
After we install the CLI (npm install -g netlify-cli) and log into our Netlify account (ntl login), we can go to the root directory of our application, run ntl init and follow the steps.
- Using Netlify web app
We should go to the Netlify web app, connect to our Git provider, choose our application’s repository from the list, configure some build options, and deploy.
For both methods, we have to consider that our build command will be next build and our directory to deploy is out.
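If we prefer to keep these settings in the repository instead of the Netlify UI, a minimal netlify.toml at the project root can capture them. The [build] section with command and publish keys is standard Netlify configuration; the exact values below simply mirror the settings mentioned above and should be treated as an illustration.

[build]
  command = "next build"
  publish = "out"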
Finally, the Essential Next.js plugin is installed automatically, which will allow us to deploy and use API routes, dynamic routes, and Preview Mode. And that’s it, we have our Next.js application up and running in a fast and stable CDN hosting service.
Conclusion
In this article, we evaluated websites using the jQuery library, and we compared them with newer frontend frameworks like React and Next.js. We defined how we could start a migration, in case it benefits us, to a newer tool. We evaluated different migration strategies and we saw some examples of scenarios that we could migrate to Next.js web application projects. Finally, we saw how to deploy our Next.js application to Netlify and get it up and running.
Further Reading and Resources
- Frankenstein Migration: Framework-Agnostic Approach
- Removing jQuery from GitHub.com frontend
- Getting Started with Next.js
- How to Deploy Next.js Sites to Netlify
- Next.js articles in Netlify Blog
|
https://www.smashingmagazine.com/2021/07/migrate-jquery-nextjs/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
This changed the odds of winning the jackpot from over eight million to one to 10.7 million to one.
The Irish Lotto takes place every Wednesday and Saturday night at the cost of €2 and offers excellent odds of winning any prize of just 1 in 29 and boasts minimum jackpots of €2 million. In addition to this, you can take the chance to win two more jackpots in the Plus 1 and Plus 2 games for just 50c more. You can simply choose 6 numbers from 1–47. You can also enter.
The draw works by drawing six numbers and an additional bonus number from a drum containing numbered balls from 1–47. Jackpots in the main game, Plus 1 and Plus 2 draws, are won when a player matches all six main numbers.
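As a quick sanity check on the jackpot odds quoted above, the number of possible six-number combinations can be computed directly; this short Python snippet is an added illustration and not part of the original analysis.

from math import comb

print(comb(47, 6))  # 10737573 -> roughly 10.7 million to one with 47 balls
print(comb(45, 6))  # 8145060  -> roughly 8.1 million to one with 45 balls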
More details can be found here. The data is being scraped using Python and Beautiful Soup from the same website.
If you would like to get data from the last 20 years to do an analysis of your own, please visit the GitHub page.
Scraping Data for analysis of winning numbers of Irish Lotto
#import the library used to query a website
import requests

#specify the url for 2017
lottery = ""

#Query the website and return the html to the variable 'page'
page = requests.get(lottery)
page.status_code
200
A status_code of 200 means that the page downloaded successfully.
#import the Beautiful Soup functions to parse the data returned from the website
from bs4 import BeautifulSoup

#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page.content, 'html.parser')
# Find the 6 lotto numbers available at the class = "ball"
ball_class = soup.find_all('td', class_='ball')

#get the text using list comprehension
ball = [pt.get_text() for pt in ball_class]
ball
type(ball)
list
#convert into a dataframe in ascending order
import numpy as np
import pandas as pd

df_ball = pd.DataFrame(np.array(ball).reshape(-1,6), columns = list("abcdef"))
# Find the bonus number available at the class = "bonus-ball"
ball_bonus_class = soup.find_all('td', class_='bonus-ball')
ball_bonus = [pt.get_text() for pt in ball_bonus_class]
type(ball_bonus)
list
df_ball_bonus = pd.DataFrame(ball_bonus,columns = ['Bonus'])
#join the two dataframes
df_2017 = pd.concat([df_ball, df_ball_bonus], axis=1)
#find all the dates available in th
date = soup.find_all('th')
date1 = [pt.get_text() for pt in date]
#convert to the dataframe
df_date = pd.DataFrame(date1, columns =['Date'])

#select the dataframe after first two rows as they show result date and draw result
df_date = df_date[2:]
#reset the index from 0
df_date = df_date.reset_index()
del df_date['index']
df_date.dtypes
Date object dtype: object
#convert the date into date time
df_date = df_date.apply(pd.to_datetime)
# df_date.dtypes
Date datetime64[ns] dtype: object
df1 = pd.concat([df_date, df_2017], axis=1)
#df1.to_csv('2016.csv')
Make a main file
from glob import glob

with open('main_csv.csv', 'a') as singleFile:
    for csv in glob('*.csv'):
        if csv == 'main_csv.csv':
            pass
        else:
            for line in open(csv, 'r'):
                singleFile.write(line)
Analysis of winning numbers of Irish Lotto from 2nd September 2015 to 9th September 2017
df = pd.read_csv("main.csv")
df.head(2)
df.tail(2)
#Taking mean of the dataset gives the following values
df.describe()
The Magic Touch
In 2009 Derren Brown hosted a live show in tandem with the UK lottery draw, where he announced to viewers that he was going to try and predict the outcome of that night’s events. Brown correctly predicted 6 numbers. His explanation, which went down like a tonne of bricks, was that he quite simply asked 24 people to predict 6 numbers, then he added up the total for each one, divided it by 24 and voila, somehow that led him to predict the lottery.
The analysis of the winning numbers of the Irish Lotto finds the most frequent winning numbers.
#Select the numbers only and convert to a list
df_ball = df.drop(['Date','Bonus'], axis=1)
list_ball = list(df_ball.values.T.flatten())
## The maximum occurrence of a number in the list of numbers
from collections import Counter

most_common, num_most_common = Counter(list_ball).most_common(1)[0]
print(most_common) print(num_most_common)
10 38
# find the 6 most common numbers in the list
most_common_6 = Counter(list_ball).most_common(6)
most_common_6
[(10, 38), (7, 35), (27, 35), (16, 34), (2, 33), (18, 32)]
number_counter = {}
for number in list_ball:
    if number in number_counter:
        number_counter[number] += 1
    else:
        number_counter[number] = 1

popular_numbers = sorted(number_counter, key = number_counter.get, reverse = True)
print(popular_numbers)
print(number_counter)
[10, 7, 27, 16, 2, 18, 42, 45, 6, 15, 1, 17, 40, 9, 22, 28, 29, 31, 34, 5, 25, 32, 43, 8, 20, 38, 47, 4, 19, 37, 41, 11, 12, 14, 39, 44, 46, 30, 36, 3, 24, 26, 21, 23, 13, 33, 35] {1: 30, 2: 33, 3: 22, 4: 26, 5: 28, 6: 31, 7: 35, 8: 27, 9: 29, 10: 38, 11: 25, 12: 25, 13: 18, 14: 25, 15: 31, 16: 34, 17: 30, 18: 32, 19: 26, 20: 27, 21: 20, 22: 29, 23: 20, 24: 21, 25: 28, 26: 21, 27: 35, 28: 29, 29: 29, 30: 23, 31: 29, 32: 28, 33: 18, 34: 29, 35: 17, 36: 23, 37: 26, 38: 27, 39: 25, 40: 30, 41: 26, 42: 32, 43: 28, 44: 24, 45: 32, 46: 24, 47: 27}
#for plot of the most popular numbers
import matplotlib.pyplot as plt
%matplotlib inline

#lists = sorted(number_counter.items()) # sorted by key, return a list of tuples
x, y = zip(*number_counter.items()) # unpack a list of pairs into two tuples

plt.plot(x, y)
plt.ylabel('times')
plt.xlabel("Numbers")
plt.title("Occurrence of numbers")
plt.show()
#print the dictionary with sorted values
dictionary_numbers = Counter(list_ball)
dictionary_numbers_sorted_keys = sorted(dictionary_numbers, key=dictionary_numbers.get, reverse=True)
for r in dictionary_numbers_sorted_keys:
    print("{}\t:\t{} times".format(r, dictionary_numbers[r]))
10 : 38 times 7 : 35 times 27 : 35 times 16 : 34 times 2 : 33 times 18 : 32 times 42 : 32 times 45 : 32 times 6 : 31 times 15 : 31 times 1 : 30 times 17 : 30 times 40 : 30 times 9 : 29 times 22 : 29 times 28 : 29 times 29 : 29 times 31 : 29 times 34 : 29 times 5 : 28 times 25 : 28 times 32 : 28 times 43 : 28 times 8 : 27 times 20 : 27 times 38 : 27 times 47 : 27 times 4 : 26 times 19 : 26 times 37 : 26 times 41 : 26 times 11 : 25 times 12 : 25 times 14 : 25 times 39 : 25 times 44 : 24 times 46 : 24 times 30 : 23 times 36 : 23 times 3 : 22 times 24 : 21 times 26 : 21 times 21 : 20 times 23 : 20 times 13 : 18 times 33 : 18 times 35 : 17 times
Picking the most commonly drawn numbers
One approach would be to choose the numbers that come up most often. At the moment the most frequently drawn ball is the number 10, which has come up 38 times. The other numbers are:
- 7: 35 times
- 27: 35 times
- 16: 34 times
- 2: 33 times
- 18: 32 times
- 42: 32 times
- 45: 32 times
Their frequency of appearance is no indication that they will be drawn together. In fact, the chance of these numbers cropping up in a winning combination is the same as any other set of six.
However, you can observe that certain numbers appear more often than others.
#counts of unique values for the bonus to find the most frequently occurring numbers
df['Bonus'].value_counts()
35 10 13 8 9 8 31 8 3 7 5 7 16 6 17 6 19 6 7 6 29 6 47 6 37 6 39 6 44 6 40 6 41 5 6 5 28 5 34 5 22 5 20 5 4 4 14 4 45 4 43 4 18 4 46 4 21 4 27 4 10 4 38 4 30 4 8 3 23 3 12 3 15 3 32 3 1 3 24 2 25 2 33 2 42 2 2 2 11 1 36 1 Name: Bonus, dtype: int64
Hence, 35, 13, 9, and 31 are the most common winning bonus numbers.
The lottery selects random numbers.
You can generate 10,000,000 random numbers in the range 1 to 47 and calculate the mean.
import random

# note: np.random.randint's upper bound is exclusive, so this samples integers 1-46
list_random = np.random.randint(1, 47, size=10000000)
print(np.mean(list_random))
print(np.std(list_random))
23.5031823 13.2781247574
Mean of winning numbers from last two years
#mean
np.mean(list_ball)
23.570754716981131
#standard deviation
np.std(list_ball)
13.737419404066852
This shows that both the randomly generated numbers and the winning numbers from the last 2 years have an approximate mean of 23.5 and a standard deviation of roughly 13.5.
np.median(list_ball)
23.0
Analysis of winning numbers of the Irish Lotto
Assuming a normal probability distribution, you can use the Z value to calculate how many standard deviations a number is from the mean.
Investigate the validity by calculating the probability of the number X.
You can select two of the most frequently winning numbers: 10 and 42, i.e. P(10 <= X <= 42). This probability is the area between 10 and 42 under the normal curve.
We can calculate the Z-score or standard score.
The basic z score formula for a sample is: z = (x – μ) / σ
z_10 = (10 - 23) / 14
z_42 = (42 - 23) / 14

print("10 is {} standard deviation below the mean m = 23".format(z_10))
print("42 is {} standard deviation above the mean m = 23".format(z_42))
10 is -0.9285714285714286 standard deviation below the mean m = 23 42 is 1.3571428571428572 standard deviation above the mean m = 23
The area between 10 and 42 under a normal curve having mean m = 23 and standard deviation d = 14 equals the area between -0.92 and 1.35 under the standard normal curve. This equals the area between –0.92 and 0 plus the area between 0 and 1.35.
Z is the standard normal random variable. The table value for Z is the value of the cumulative normal distribution at z.
The normal table tells us that the area between –0.92 and 0, which equals the area between 0 and 0.92, is 0.3212. The normal table also tells us that the area between 0 and 1.35, which equals the area between –1.35 and 0, is 0.4115. Hence, the probability is 0.3212 + 0.4115 = 0.7327.
This probability states that 73.27 percent of all of the numbers are between 10 and 42.
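The same probability can be checked numerically with SciPy's normal CDF. This snippet is an added illustration (SciPy is not used elsewhere in the analysis) and uses the rounded mean and standard deviation from above:

from scipy.stats import norm

# P(10 <= X <= 42) for X ~ Normal(mean=23, sd=14)
p = norm.cdf(42, loc=23, scale=14) - norm.cdf(10, loc=23, scale=14)
print(round(p, 4))  # about 0.736; the 0.7327 in the text comes from rounding the z-scores to -0.92 and 1.35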
Please leave any comment or queries.
This Post Has 9 Comments
Hello,
I have a csv file of 3 years lottery results
How can I use your Web Service to predict a future results ?
Thans’s in advance
Kind regards
Not so great article, difficult to follow it.
Hi. I am trying to run this script but get_text does not get any information for class_=’ball’ would you happen to know why? It works for the date and interestingly it works if i get_text() just for “td”
In
# Find the 6 lotto numbers available at the class = “Ball”
ball_class = soup.find_all(“td”, class_=”ball”)
#get the text using list comprehension
ball = [pt.get_text() for pt in ball_class]
ball
print(ball)
type(ball)
out
[]
list
Thanks in advance
Hello! Do you have forecasting for the next run based on these statistics?
Thanks for your respone
Hello Admin,
I have a problem with my code
It not similar to you
In my data, i hava 1 unexpection column that is ‘Unname’ in df_2017 such as: ,Date, a,b,c,d,e,f, Bonus.
In font of ,Date i have a column Unname, When i load df.head(2).
For example:
Unname Date a b c d e f Bonus
210 210 05/09/2015 7 9 17 20 26 27 40
211 211 02/09/2015 7 20 22 33 35 42 41
What can i do drop uname?
I am looking foward to you repone
Thanks you so much
Best regards.
del df_2017.Unname
Can you help me?
I want to predict next lotto number with python,
Can You show me your code?
Thanks you very much
Hello, I would like to receive the same information!
|
https://adataanalyst.com/data-analysis-resources/winning-numbers-irish-lotto/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Build a fully functional (non-violent) Squid Games Doll that plays red-light-green-light with you.
Things used in this project
Hardware components
Software apps and online services
Hand tools and fabrication machines
3D Printer (generic)
Story
Built a fully functional squid games doll. She plays the red-light-green-light game with you. Complete with rotating head, colored eyes, and she talks! She uses ultra-sonic and motion detection to determine if you win or lose. But don't worry, if you lose she just asks if you want to play again.
Watch the video and let me know what you think.
I used every pin on the Arduino UNO! Which I've never done before so this was an achievement for myself. This project took me 3 weeks to build with 1 week dedicated entirely to printing! It took me 6 days to print this doll. 1 week for the build and another week to edit the video.
ELEGOO sent me a free UNO kit if I make them a video so this is why I built the doll. It was either this or build an escape room. I'm happy they chose this project. I hope people enjoy it because it was fun build that came out looking really nice and creeps out a bunch of people. But more importantly, it works.
Here are all the parts I will use for this build.
1. Start Printing
Printing is going to take a long time. It took me 6 days to print the entire doll out. I also used different color filament so that I can reduce the amount of painting.
I remixed a model I found on thingiverse.com, hollowed out the center, and added access holes for the electronics. I also modified the chest plate for the Servo and Ultra Sonic to be mounted.
2. Nobody likes painting
Time to paint. I used generic spray paint for this. I painted the inside of the dolls head (masked off the eyes) so that the LEDs for the eyes will not make the entire face glow. Although this might be the effect you are looking for. I wanted just the eyes to glow.
3. Magnets attract but glue sticks
One way to attach all of the doll's limbs is to melt magnets into the plastic. This is if you want to be able to take her apart. If I were to do this project again I would probably just glue all the limbs on her. As I see it now, there is little advantage to using magnets aside from the fact that she can fit into a smaller box for storage if you want. The only thing you should not attach at this point is the head.
4. Easy on the eyes
Start with the easiest step, the eyes. I used tri-color LEDs for the eyes. As you know, you can mix and match RGB colors to get basically any color you'd like. I stuck with primary and secondary colors so I didn't have to PWM the signals. But you can if you are looking for that.
The longest pin is the ground, that will be pin 2.
Connect the LED as pictured using 220ohm resistors for each lead aside from the ground.
For mounting, I simply hot glued the LEDs as close to the center of the eyes as I could get, but on the reverse side. Be sure to leave enough wire to pass down the neck and into the lower part of her body.
5. LCD Menu
The next easiest component is the 16x2 LCD screen. You should use the LCD screen with an I2C adapter. It will make your life much easier and reduces the IO count from 6 to 2. Once this is connected, the LCD should startup with "Welcome to the squid games!" on the display.
For mounting, I printed out a 1mm thick circle. I made it this thin so that I could mold it to the doll's back with a heat gun. This is much easier than figuring out the contours of her back (at least for me). I installed threaded inserts for the display with nuts on the reverse side to secure the display and the display mount to the body.
6. Only owl heads rotate 180 degrees
The servo was difficult for one main reason, I don't use the servo library. I know that sounds weird but I had to use the timer1 for the 4 digit display update and the servo library also uses this. Luckily, the servo is either 0 degrees or 180 degrees and there is no in between making this a lot easier.
Timer1 is set up for 0.5 ms intervals, 2000 Hz. The servo period is 20 ms. At 0 degrees the pin only needs to be high for 2 counts and low for the rest of the period. For 180 degrees the pin needs to be high for 4 counts and low the rest of the time.
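For reference, the compare-match value that produces that 2000 Hz tick on a 16 MHz UNO with a /64 prescaler can be worked out as follows (this mirrors the TIMER_MATCH define in the sketch further down; it is an explanatory note, not extra code to add):

// Timer1 compare value for a 0.5 ms (2000 Hz) tick on a 16 MHz UNO with a /64 prescaler:
// OCR1A = 16,000,000 / (64 * 2000) - 1 = 124
// 40 of these 0.5 ms ticks make up the 20 ms servo period used above.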
There is a nice mount on the chest plate for the servo. You can screw it into place or glue it into place. I used epoxy to secure the servo to the chest plate because it will also add strength to the chest plate and hopefully prevent it from damage.
7. Sounds like a bat
Next we will install the ultra sonic distance module. I have this updating every 250ms. It also has a nice mounting location on the chest plate. There are only 2 wires for this module.
I used epoxy to mount the ultra sonic to the chest plate.
8. No strings attached
The IR sensor for the remote is only needed if you want to control the game play. I thought this would be fun but don't really use this mode, automatic game play is fun enough.
I chose to mount the IR sensor inside a clip on the doll's hair. You can obviously choose to place it somewhere else. I was trying to hide it, but maybe there is a better place, because the IR doesn't always see the remote when she turns her head and the sensor is on the other side.
9. Time to Time
Next we will set up the timer display. This is a lot of work for a 4 digit display. I will include the connection diagram from ELEGOO. The game play is only up to 5 minutes so I also removed the use of the most significant digit. But you can decide to keep it if you have the IO pin available. To update the display you have to cycle the LEDs very quickly because you can only have one digit active at a time. This is why they seem to flicker when watched through a camera. I used a 2ms refresh rate which is fast enough that you can not see the flicker. At 5ms I can start to see it flicker when looking at the display in my peripheral vision. In addition, you will need the shift register 74HC595.
Mounting the display was not fun. I decided it was best to integrate the display into her belt. The original doll in Squid Games does not have a belt of course, but sacrifices had to be made to get this display on her. If you choose this route too, mask off a square the same size as the display and then cut it out with a Dremel. I then used epoxy putty to add a gradual transition to the display. But this was not needed, I just thought it looked better this way.
I mounted the 74HC595 to the prototype shield, otherwise you will have wires going all over the place. An alternative solution is to use a different timer display that has a more convenient communication with less pins.
10. I saw you move
The motion detector is a weird little guy. This thing uses infrared to detect movement. One thing I learned is that this sensor needs time to warm up. On startup it needs 1 minute to warm up. That is why there is a 1 minute startup time for the doll. Another annoyance with this module is that the fastest it can update a movement detection is about 5 seconds. The last annoyance is how sensitive this sensor is. Even with the sensitivity turned all the way down, it still can see the smallest of movements and sometimes movement that I don't even know what it is talking about. To help prevent these "false positives" I mounted the sensor inside a horse blinder box. The box has a small hole (7mm) for the motion detector to look out. As a bonus, this prevents you from having to mount this giant sensor on the outside of the doll. The motion sensor only has one binary wire for feedback, motion or not.
To mount the sensor, I printed out the horse blinder and glued it to the inside of the doll. I then drilled a hole through the body. I used threaded insert on the blinder box to secure the motion sensor.
11. Don't push my buttons
Finally, we are at the buttons. If you have the extra I/O pins, it is easier to connect each of these to a digital input. But I did not have this luxury for the UNO. Instead I had to use an analog input to read the resistor values to determine which button was being pressed. The values I used were, 1K, 2K, and 5K. Then I had a 220 Ohm resistor to pull the analog input low. Otherwise it will float and get random button presses.
I mounted the buttons on the same mounting plate as the LCD. This was not easy but I didn't have a better way. Soldering the wires onto these buttons and then getting them to pass through little holes drilled in the plastic will test your patience.
12. Can you hear me now?
Last step and probably the most important is the sound module. This will use the serial port on the UNO so you must add 1K Ohm resistors to the Tx and Rx pins otherwise, you will get blocked from programming the UNO after this connection is made. In addition, you will need to use the "busy" pin so that the UNO knows that a sounds is already playing. This is very important if you have MP3s play back-to-back.
I mounted the MP3 player module on the prototype shield. This shield makes mounting components like this very convenient because it then just plugs into the UNO. This module will need an 8ohm speaker and has an output of 3W. The speaker was just glued down to the base of the doll. I drilled small holes under the speaker for the sound to come out better.
13. Mount the UNO
Install the UNO onto the platform and plug the prototype shield onto the UNO. Be sure that you have labeled all of the wires, if not you probably don't know where any of them go by now. With a little bit of negotiation, you can get the mounted UNO inside the doll with all the wires connected.
I used threaded inserts to mount the platform to the bottom of the doll.
14. Test Fix Test
This is when you get to put your debugging hat on. I can tell you the software is working on GitHub so at least that is one less thing to debug. But go ahead anyway if you have doubts and send me any updates you find.
15. Let's play
Time to test her out and play a game. Here is how the game is programmed.
On startup she turns her head forward.
The motion sensor takes a full minute to start up, so there is a timer when it starts. Halfway through she giggles and turns her head around. Then she announces when she is ready.
Depending on if you have the game set to remote she says different things. In Auto mode she asks you to press the play button. In my case, this is the far right button. In remote mode she will ask you to press the power button when you are ready. Then press the play button to toggle to red light or green light.
So when you are ready, press the go button and she will give you 10 seconds to get in place. Usually someone else nearby will press this button.
Then the game begins. She will start with green light. For green light you have to get within 50cm to trigger a win. If you are within 100cm she will indicate that you are getting closer. Green light only uses the sonar.
For red light both the motion sensor and the distance sensor are being used. If you move enough for the motion sensor to trip, or if you move more than 10cm forward, you will lose the game. You will also lose the game if time runs out. She will remind you that time is almost out at 5 seconds left.
The last cool feature is that she will also speak in the Korean voice for the red light. This is a menu feature. Press the far left button to toggle the menu item, and the center button to toggle the item options.
16. Watch Video
This video took me a long time to edit. I have probably 30 hours in just editing. But it was fun making it. I think it came out good and is funny but want you to see for yourself. Please let me know what you think and if you have any questions.
Thank You!
Schematics
Wire diagram
This is how I connected all of the components to the UNO.
The project repo
All of the files for this build are stored here.
Code
Squid Game Doll Sketch
C/C++
This will control all of the sensor and the game logic
/// CodeMakesItGo Dec 2021 #include <DFPlayerMini_Fast.h> #include <FireTimer.h> #include <IRremote.h> #include <LiquidCrystal_I2C.h> #include <SoftwareSerial.h> #include <SR04.h> #include <Wire.h> /*-----( Analog Pins )-----*/ #define BUTTONS_IN A0 #define SONAR_TRIG_PIN A1 #define SONAR_ECHO_PIN A2 #define MOTION_IN A3 /*-----( Digital Pins )-----*/ #define LED_BLUE 13 #define LED_GREEN 12 #define LED_RED 11 #define SEGMENT_DATA 10 // DS #define SEGMENT_CLOCK 9 // SHCP #define SEGMENT_LATCH 8 // STCP #define SEGMENT_1_OUT 7 #define SEGMENT_2_OUT 6 #define SEGMENT_3_OUT 5 #define IR_DIGITAL_IN 4 // IR Remote #define SERVO_OUT 3 #define DFPLAYER_BUSY_IN 2 /*-----( Configuration )-----*/ #define TIMER_FREQUENCY 2000 #define TIMER_MATCH (int)(((16E+6) / (TIMER_FREQUENCY * 64.0)) - 1) #define TIMER_2MS ((TIMER_FREQUENCY / 1000) * 2) #define VOLUME 30 // 0-30 #define BETTER_HURRY_S 5 // play clip at 5 seconds left #define WIN_PROXIMITY_CM 50 // cm distance for winner #define CLOSE_PROXIMITY_CM 100 // cm distance for close to winning #define GREEN_LIGHT_MS 3000 // 3 seconds on for green light #define RED_LIGHT_MS 5000 // 5 seconds on for green light #define WAIT_FOR_STOP_MOTION_MS 5000 // 5 seconds to wait for motion detection to stop /*-----( Global Variables )-----*/ static unsigned int timer_1000ms = 0; static unsigned int timer_2ms = 0; static unsigned char digit = 0; // digit for 4 segment display static int countDown = 60; // Start 1 minute countdown on startup static const int sonarVariance = 10; // detect movement if greater than this static bool gameInPlay = false; static bool faceTree = false; static bool remotePlay = false; // 0 , 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, NULL const unsigned char numbers[] = x00}; const char *MenuItems[] = {"Language", "Play Time", "Play Type"}; typedef enum { LANGUAGE, PLAYTIME, PLAYTYPE, MENUITEM_COUNT } MenuItemTypes; const char *Languages[] = {"English", "Korean"}; typedef enum { ENGLISH, KOREAN, LANUAGE_COUNT } LanguageTypes; static int language = 0; const char *PlayTime[] = {"300", "240", "180", "120", "60", "30", "15"}; typedef enum { PT300, PT240, PT180, PT120, PT60, PT30, PT15, PLAYTIME_COUNT } PlayTimeTypes; const int playTimes[] = {300, 240, 180, 120, 60, 30, 15}; static int playTime = 0; const char *PlayType[] = {"Auto", "Remote"}; typedef enum { AUTO, REMOTE, PLAYTYPE_COUNT } PlayTypeTypes; static int playType = 0; typedef enum { BLACK, RED, GREEN, BLUE, WHITE, YELLOW, PURPLE } EyeColors; EyeColors eyeColor = BLACK; typedef enum { WARMUP, WAIT, READY, GREENLIGHT, REDLIGHT, WIN, LOSE } GameStates; static GameStates gameState = WARMUP; /*-----( Class Objects )-----*/ FireTimer task_50ms; FireTimer task_250ms; DFPlayerMini_Fast dfPlayer; SR04 sonar = SR04(SONAR_ECHO_PIN, SONAR_TRIG_PIN); IRrecv irRecv(IR_DIGITAL_IN); decode_results irResults; LiquidCrystal_I2C lcdDisplay(0x27, 16, 2); // 16x2 LCD display /*-----( Functions )-----*/ void translateIR() // takes action based on IR code received { switch (irResults.value) { case 0xFFA25D: Serial.println("POWER"); if (gameState == WAIT) { gameInPlay = true; } break; case 0xFFE21D: Serial.println("FUNC/STOP"); break; case 0xFF629D: Serial.println("VOL+"); break; case 0xFF22DD: Serial.println("FAST BACK"); break; case 0xFF02FD: Serial.println("PAUSE"); remotePlay = !remotePlay; break; case 0xFFC23D: Serial.println("FAST FORWARD"); break; case 0xFFE01F: Serial.println("DOWN"); break; case 0xFFA857: Serial.println("VOL-"); break; case 0xFF906F: Serial.println("UP"); break; case 0xFF9867: 
Serial.println("EQ"); break; case 0xFFB04F: Serial.println("ST/REPT"); break; case 0xFF6897: Serial.println("0");; case 0xFFFFFFFF: Serial.println(" REPEAT"); break; default: Serial.println(" other button "); } } bool isPlayingSound() { return (digitalRead(DFPLAYER_BUSY_IN) == LOW); } void updateTimeDisplay(unsigned char digit, unsigned char num) { digitalWrite(SEGMENT_LATCH, LOW); shiftOut(SEGMENT_DATA, SEGMENT_CLOCK, MSBFIRST, numbers[num]); // Active LOW digitalWrite(SEGMENT_1_OUT, digit == 1 ? LOW : HIGH); digitalWrite(SEGMENT_2_OUT, digit == 2 ? LOW : HIGH); digitalWrite(SEGMENT_3_OUT, digit == 3 ? LOW : HIGH); digitalWrite(SEGMENT_LATCH, HIGH); } void updateServoPosition() { static int servoPulseCount = 0; static bool lastPosition = false; // Only get new value at start of period if (servoPulseCount == 0) lastPosition = faceTree; if (!lastPosition) // 180 degrees { digitalWrite(SERVO_OUT, servoPulseCount < 5 ? HIGH : LOW); } else // 0 degrees { digitalWrite(SERVO_OUT, servoPulseCount < 1 ? HIGH : LOW); } servoPulseCount = (servoPulseCount + 1) % 40; // 20ms period } void updateMenuDisplay(const int button) { static int menuItem = 0; static int menuOption = 0; switch (button) { case 1: menuItem = (menuItem + 1) % MENUITEM_COUNT; if (menuItem == LANGUAGE) { menuOption = language; } else if (menuItem == PLAYTIME) { menuOption = playTime; } else if (menuItem == PLAYTYPE) { menuOption = playType; } else { menuOption = 0; } break; case 2: if (menuItem == LANGUAGE) { menuOption = (menuOption + 1) % LANUAGE_COUNT; language = menuOption; } else if (menuItem == PLAYTIME) { menuOption = (menuOption + 1) % PLAYTIME_COUNT; playTime = menuOption; } else if (menuItem == PLAYTYPE) { menuOption = (menuOption + 1) % PLAYTYPE_COUNT; playType = menuOption; } else { menuOption = 0; } break; case 3: if (gameState == WAIT) { gameInPlay = true; } if (gameState == GREENLIGHT || gameState == REDLIGHT) { gameInPlay = false; } default: break; } if (menuOption != -1) { lcdDisplay.clear(); lcdDisplay.setCursor(0, 0); lcdDisplay.print(MenuItems[menuItem]); lcdDisplay.setCursor(0, 1); if (menuItem == LANGUAGE) { lcdDisplay.print(Languages[menuOption]); } else if (menuItem == PLAYTIME) { lcdDisplay.print(PlayTime[menuOption]); } else if (menuItem == PLAYTYPE) { lcdDisplay.print(PlayType[menuOption]); } else { lcdDisplay.print("unknown option"); } } else { menuItem = 0; menuOption = 0; } } void handleButtons() { static int buttonPressed = 0; int value = analogRead(BUTTONS_IN); if (value < 600) // buttons released { if (buttonPressed != 0) updateMenuDisplay(buttonPressed); buttonPressed = 0; return; } else if (value < 700) { Serial.println("button 1"); buttonPressed = 1; } else if (value < 900) { Serial.println("button 2"); buttonPressed = 2; } else if (value < 1000) { Serial.println("button 3"); buttonPressed = 3; } else { Serial.println(value); buttonPressed = 0; } } static int lastSonarValue = 0; void handleSonar() { int value = sonar.Distance(); if (value > lastSonarValue + sonarVariance || value < lastSonarValue - sonarVariance) { Serial.println(value); lastSonarValue = value; } } static int lastMotion = 0; void handleMotion() { int value = digitalRead(MOTION_IN); if (value != lastMotion) { lastMotion = value; } if (lastMotion) Serial.println("Motion Detected"); } void handleLeds() { digitalWrite(LED_RED, eyeColor == RED || eyeColor == WHITE || eyeColor == PURPLE || eyeColor == YELLOW ? HIGH : LOW); digitalWrite(LED_GREEN, eyeColor == GREEN || eyeColor == WHITE || eyeColor == YELLOW ? 
HIGH : LOW); digitalWrite(LED_BLUE, eyeColor == BLUE || eyeColor == WHITE || eyeColor == PURPLE ? HIGH : LOW); } void handleRemote() { // have we received an IR signal? if (irRecv.decode(&irResults)) { translateIR(); irRecv.resume(); // receive the next value } } // Timer 1 ISR ISR(TIMER1_COMPA_vect) { // Allow this ISR to be interrupted sei(); updateServoPosition(); if (timer_1000ms++ == TIMER_FREQUENCY) { timer_1000ms = 0; countDown--; if (countDown < 0) { countDown = 0; } } if (timer_2ms++ == TIMER_2MS) { timer_2ms = 0; if (digit == 0) updateTimeDisplay(1, countDown % 10); if (digit == 1) updateTimeDisplay(2, (countDown / 10) % 10); if (digit == 2) updateTimeDisplay(3, (countDown / 100) % 10); if (digit == 3) updateTimeDisplay(4, 16); digit = ((digit + 1) % 4); } } void playGame() { static int sequence = 0; static long internalTimer = millis(); static bool closerClipPlayed = false; static bool hurryUpClipPlayed = false; static int captureDistance = 0; long currentTimer = internalTimer; if(isPlayingSound()) return; if (gameState == WARMUP) { // power up sound if (sequence == 0) { Serial.println("Warming Up"); dfPlayer.playFolder(1, 1); faceTree = false; eyeColor = YELLOW; sequence++; } // laugh at 30 else if (sequence == 1 && countDown <= 30) { Serial.println("Laughing"); dfPlayer.playFolder(1, 2); faceTree = true; sequence++; } else if (sequence == 2 && countDown <= 10) { Serial.println("Almost ready"); dfPlayer.playFolder(1, 3); sequence++; } else if (sequence == 3 && countDown == 0) { Serial.println("All ready, lets play"); dfPlayer.playFolder(1, 4); faceTree = false; sequence = 0; gameState = WAIT; gameInPlay = false; } } else if (gameState == WAIT) { currentTimer = millis(); if (gameInPlay) { gameState = READY; remotePlay = false; sequence = 0; } // Every 30 seconds else if (currentTimer - internalTimer > 30000 || sequence == 0) { internalTimer = millis(); if(playType == AUTO) { // press the go button when you are ready Serial.println("Press the go button when you are ready"); dfPlayer.playFolder(1, 5); } else { Serial.println("Press the power button on the remote when you are ready"); dfPlayer.playFolder(1, 6); } // eyes are blue eyeColor = BLUE; // facing players faceTree = false; gameInPlay = false; sequence++; } } else if (gameState == READY) { currentTimer = millis(); if (sequence == 0) { // get in position, game will start in 10 seconds Serial.println("Get in position."); dfPlayer.playFolder(1, 7); countDown = 10; // eyes are green eyeColor = WHITE; // facing players faceTree = false; sequence++; internalTimer = millis(); } else if (sequence == 1) { if (playType == REMOTE) { if (remotePlay) sequence++; } else sequence++; } else if (sequence == 2) { // at 0 seconds, here we go! 
if (countDown == 0) { countDown = playTimes[playTime]; Serial.print("play time set to "); Serial.println(countDown); Serial.println("Here we go!"); dfPlayer.playFolder(1, 8); gameState = GREENLIGHT; sequence = 0; } } } else if (gameState == GREENLIGHT) { currentTimer = millis(); if (sequence == 0) { // eyes are green eyeColor = GREEN; // play green light Serial.println("Green Light!"); dfPlayer.playFolder(1, 9); sequence++; } else if(sequence == 1) { // play motor sound dfPlayer.playFolder(1, 19); // facing tree faceTree = true; sequence++; internalTimer = millis(); } else if (sequence == 2) { // wait 3 seconds or until remote // switch to red light if (playType == AUTO && currentTimer - internalTimer > GREEN_LIGHT_MS) { sequence = 0; gameState = REDLIGHT; } else if (playType == REMOTE && remotePlay == false) { sequence = 0; gameState = REDLIGHT; } else { // look for winner button or distance if (gameInPlay == false || lastSonarValue < WIN_PROXIMITY_CM) { sequence = 0; gameState = WIN; } else if (countDown <= 0) { Serial.println("Out of Time"); dfPlayer.playFolder(1, 16); sequence = 0; gameState = LOSE; } // at 2 meters play "your getting closer" else if (lastSonarValue < CLOSE_PROXIMITY_CM && closerClipPlayed == false) { Serial.println("Getting closer!"); dfPlayer.playFolder(1, 11); closerClipPlayed = true; } // if less than 5 seconds play better hurry else if (countDown <= BETTER_HURRY_S && hurryUpClipPlayed == false) { Serial.println("Better Hurry"); dfPlayer.playFolder(1, 12); hurryUpClipPlayed = true; } } } } else if (gameState == REDLIGHT) { currentTimer = millis(); if (sequence == 0) { // eyes are red eyeColor = RED; Serial.println("Red Light!"); if(language == ENGLISH) { // play red light English dfPlayer.playFolder(1, 10); } else { // play red light Korean dfPlayer.playFolder(1, 18); } sequence++; } else if(sequence == 1) { // play motor sound dfPlayer.playFolder(1, 19); // facing players faceTree = false; // Save current distance captureDistance = lastSonarValue; sequence++; internalTimer = millis(); } else if (sequence == 2) { //wait for motion to settle if (lastMotion == 0 || (currentTimer - internalTimer) > WAIT_FOR_STOP_MOTION_MS) { internalTimer = millis(); sequence++; Serial.println("Done settling"); } Serial.println("Waiting to settle"); } else if (sequence == 3) { // back to green after 5 seconds if (playType == AUTO && currentTimer - internalTimer > RED_LIGHT_MS) { sequence = 0; gameState = GREENLIGHT; } else if (playType == REMOTE && remotePlay == true) { sequence = 0; gameState = GREENLIGHT; } else { // can't push the button while red light // detect movement // detect distance change if (gameInPlay == false || lastMotion == 1 || lastSonarValue < captureDistance) { Serial.println("Movement detected!"); dfPlayer.playFolder(1, 15); sequence = 0; gameState = LOSE; } if (countDown == 0) { Serial.println("Out of time"); dfPlayer.playFolder(1, 16); sequence = 0; gameState = LOSE; } } } } else if (gameState == WIN) { if (sequence == 0) { // play winner sound Serial.println("You Won!"); dfPlayer.playFolder(1, 13); // eyes are white eyeColor = WHITE; // facing players faceTree = false; sequence++; } else if (sequence == 1) { // wanna play again? 
Serial.println("Play Again?"); dfPlayer.playFolder(1, 17); gameInPlay = false; countDown = 0; // go to wait gameState = WAIT; sequence = 0; } } else if (gameState == LOSE) { if (sequence == 0) { // sorry better luck next time Serial.println("Sorry, you lost"); dfPlayer.playFolder(1, 14); // eyes are purple eyeColor = PURPLE; // face players faceTree = false; sequence++; } else if (sequence == 1) { // wanna play again? Serial.println("Play Again?"); dfPlayer.playFolder(1, 17); gameInPlay = false; countDown = 0; // go to wait gameState = WAIT; sequence = 0; } } else { //Shouldn't ever get here gameState = WARMUP; } } void loop() /*----( LOOP: RUNS CONSTANTLY )----*/ { if (task_50ms.fire()) { handleRemote(); handleButtons(); } if (task_250ms.fire()) { handleSonar(); handleMotion(); handleLeds(); playGame(); Serial.println(isPlayingSound()); } } // Setup Timer 1 for 2000Hz void setupTimer() { cli(); //stop interrupts TCCR1A = 0; // set entire TCCR1A register to 0 TCCR1B = 0; // same for TCCR1B TCNT1 = 0; //initialize counter value to 0 // set compare match register OCR1A = TIMER_MATCH; // = (16*10^6) / (2000*64) - 1 (must be <65536), 2000Hz // turn on CTC mode TCCR1B |= (1 << WGM12); // Set CS11 and CS10 bits for 64 prescaler TCCR1B |= (1 << CS11) | (1 << CS10); // enable timer compare interrupt TIMSK1 |= (1 << OCIE1A); sei(); //allow interrupts } void setup() { Serial.begin(9600); pinMode(MOTION_IN, INPUT); pinMode(BUTTONS_IN, INPUT); pinMode(DFPLAYER_BUSY_IN, INPUT); pinMode(SERVO_OUT, OUTPUT); pinMode(LED_RED, OUTPUT); pinMode(LED_GREEN, OUTPUT); pinMode(LED_BLUE, OUTPUT); pinMode(SEGMENT_LATCH, OUTPUT); pinMode(SEGMENT_CLOCK, OUTPUT); pinMode(SEGMENT_DATA, OUTPUT); pinMode(SEGMENT_1_OUT, OUTPUT); pinMode(SEGMENT_2_OUT, OUTPUT); pinMode(SEGMENT_3_OUT, OUTPUT); irRecv.enableIRIn(); // Start the receiver dfPlayer.begin(Serial); // Use the standard serial stream for DfPlayer dfPlayer.volume(VOLUME); // Set the DfPlay volume lcdDisplay.init(); // initialize the lcd lcdDisplay.backlight(); // Turn on backlight setupTimer(); // Start the high resolution timer ISR // Display welcome message lcdDisplay.setCursor(0, 0); lcdDisplay.print("Welcome to the"); lcdDisplay.setCursor(0, 1); lcdDisplay.print("Squid Games!"); // short delay to display welcome screen delay(1000); task_50ms.begin(50); // Start the 50ms timer task task_250ms.begin(250); // Start the 250ms timer task }
The article was first published in hackster, December 7, 2021
cr:
author: codemakesitgo
|
https://community.dfrobot.com/makelog-312148.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Is there any library that I can load to use all Civil 3D commands to use in Python Script?
Like these commands (Just examples):
Is there any library that I can load to use all Civil 3D commands to use in Python Script?
Like these commands (Just examples):
Would be cool to be able to string together civil 3d commands as python in dynamo. Arcgis pro has a feature where you can conduct an action and then copy a code version from a history window into a python window. Super useful. I’d love something similar for civil 3d.
This will send a command to the command line.
import System
from System import *

input = IN[0]

app = System.Runtime.InteropServices.Marshal.GetActiveObject("Autocad.Application")
adoc = app.ActiveDocument
adoc.SendCommand(input)

OUT = str(input) + " sent to command line"
Thanks Zachri! You’re the man! This will be super useful!
Hi @mzjensen i am trying to use the code with a simple example that is to copy but it does not work for me or layers I am doing something wrong, a query can be linked to several commands, you could put an example, thanks
@Christhian put a space or a “\n” character at the end so you are sending a way to trigger the command. What you have shown right now would be the equivalent of typing “Copy” into the command line but then not pressing spacebar or enter.
OUT = str(input) + " sent to command line" + " "
Ah OK, it’s a Python engine issue. Try this instead.
from Autodesk.AutoCAD.ApplicationServices import *

input = IN[0]

adoc = Application.DocumentManager.MdiActiveDocument
adoc.SendStringToExecute(input, True, False, False)

OUT = input + " sent to command line"
It still does not work, it will have something to do with it being civil 2022?
If you’ve already run the graph once and didn’t change any of the inputs, then the node won’t run again. You could add a simple toggle to run it again even if the input string didn’t change.
from Autodesk.AutoCAD.ApplicationServices import *

input = IN[0]
toggle = IN[1]

adoc = Application.DocumentManager.MdiActiveDocument
adoc.SendStringToExecute(input, True, False, False)

OUT = input + " sent to command line"
@mzjensen
If it works but it does not give me direct to select the object, if I do not have to click and hit enter so that it just works, I do not know if it will be for dynamo or some configuration that I must perform in civil 3d
An additional question that I have is that I could copy and make the copy rotate 45 degrees automatically, sorry if without leaving the first question I consult another.
In the example, you are adding a space to the output (the OUT variable), which is not what is actually sent to the command line. The adoc.SendStringToExecute() method is what is actually doing the work. You’ll notice in the recording I attached earlier that I put a space in the input string node, so it says "Copy " (notice the space). If you want to add it directly to the code, it needs to be concatenated with the input string, not the output. So either input + " " or input + "\n".
Try the Object.Copy and Object.Transform nodes. If that doesn’t work, it’s probably good to open a new thread like you mentioned.
@andre.demski @Christhian @WrightEngineering
FYI Camber v2.0.0 has a node to send commands to the command line.
Nice!!
Great
It’s a really great advantage with this node, @mzjensen
I wonder, is there a way to have more than one command in the line. I have tried but haven’t succeed yet.
And sometimes I realize that a command will not be runned again from Dynamo, do I need to insert some kind of trigger to “update” the script or should I run it from the Dynamo Player instead?
Thanks for all your hard work and all your effort!!
Regards,
Patrick
One way is to separate the commands with a space or \n character to chain them together.
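For instance, reusing the Python snippet from earlier in the thread, two commands could be chained in one string like this (a sketch; "Copy" and "Regen" are only placeholder command names):

# Each "\n" acts like pressing Enter, so the second command is triggered
# once the first one finishes.
from Autodesk.AutoCAD.ApplicationServices import *

commands = "Copy\n" + "Regen\n"
adoc = Application.DocumentManager.MdiActiveDocument
adoc.SendStringToExecute(commands, True, False, False)
OUT = commands + " sent to command line"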
|
https://forum.dynamobim.com/t/how-to-use-civil-3d-commands-in-a-python-script/65779
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
import java.io.*;

public class OutputStreamString {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        String line = "Hello there!";

        // write the string's bytes into the output stream
        stream.write(line.getBytes());

        // convert the stream's contents back into a string and print it
        String finalString = new String(stream.toByteArray());
        System.out.println(finalString);
    }
}
|
https://www.programiz.com/java-programming/examples/convert-outputstream-string
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Anatomy of a Lucene Tokenizer
public class MyCustomTokenizer extends Tokenizer {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
Here, a CharTermAttribute is added to the superclass. A CharTermAttribute stores the term text.
Here's one way to set the value of the term text in incrementToken().
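As a rough sketch (not the article's exact code), a whitespace-splitting incrementToken() for the class above could look like the following; the buffering details are illustrative:

public boolean incrementToken() throws IOException {
    clearAttributes();                  // reset all attributes for the new token
    int length = 0;
    char[] buffer = termAtt.buffer();   // reuse the attribute's internal buffer
    int c;
    while ((c = input.read()) != -1) {
        if (Character.isWhitespace(c)) {
            if (length > 0) break;      // end of the current token
            continue;                   // skip leading whitespace
        }
        if (length == buffer.length) {
            buffer = termAtt.resizeBuffer(length + 1);  // grow the buffer if needed
        }
        buffer[length++] = (char) c;
    }
    if (length == 0) return false;      // no more tokens in the stream
    termAtt.setLength(length);          // this is what actually sets the term text
    return true;
}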
Published at DZone with permission of Kelvin Tan . See the original article here.
|
https://dzone.com/articles/anatomy-lucene-tokenizer
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
ActiveX plugin (npmozax.dll) should allow Javascript/scripting through LiveConnect and XPConnect
RESOLVED FIXED in Future
Status
People
(Reporter: Paul.Oswald, Assigned: adamlock)
Tracking
Firefox Tracking Flags
(Not tracked)
Attachments
(5 attachments, 4 obsolete attachments)
I have extended Adam Lock's plugin to work with JavaScript through LiveConnect. This allows the embedded ActiveX control to be scriptable from 4.x browsers. We also have a preliminary XPConnect version coming along. I took Adam Lock's lead and implemented the Java class such that you can call:

var = pluginName.GetProperty("ActiveXPropertyName");
pluginName.Invoke("ActiveXMethodName");
alert(pluginName.npmozaxVersion());

In the first case it will return the associated property. The last line is one I use to keep track of each build, implemented in the Java class file. I guess it is strictly not necessary but might be useful if the way we call things changes. Speaking of which... I would like to bring up an issue: Is it possible to have the Java class load up a run-time instance with an interface that matches the ActiveX control? Can we do this in XPConnect? This way all sites that use ActiveX will work seamlessly since we would be doing it the same way as IE:

var = pluginName.ActiveXPropertyName;
pluginName.ActiveXMethodName();
alert(pluginName.npmozaxVersion());
Created attachment 71330 [details] [diff] [review] LiveConnect patch for npmozax.dll Here's the patch. It does not support overloaded Invokes() yet because I had some trouble getting it to compile. I was also useing a different Java SDK than Adam was so you will probably have to change that line in makefile.win This patch was made against 0.9.8 but I think I may have included the patch from when I made it. Just run this in mozilla/embedding/browser/activex/src/plugin. I am not great at the whole patching/diff procedure and I get cvs through my firewall so if I did it wrong, please let me know what I need to do to fix it.
Oops! I meant to say that I included the patch from bug 126492 above (not 46569). Also I wanted to mention that I included a .dsw and a .dsp file for Visual C++ but you might have to change the program and dll locations in the settings before building. I also changed the makefile to drop the dll directly into the plugins folder so that I could debug by pressing F5 from VC.
Nice patch, but Mozilla does not support the old Liveconnect way of scripting plugins. However, I think there may be an XPConnect bridge in the works. -->over to Adam Lock for review
Assignee: av → adamlock
Status: UNCONFIRMED → NEW
Component: Plug-ins → Embedding: ActiveX Wrapper
Ever confirmed: true
Keywords: patch
QA Contact: shrir → cpratt
I understand that the mozilla browser does not support LiveConnect. No code in the browser has been changed at all. Furthermore, if you comment out the LiveConnect variable in Makefile.win you will get the plugin without any of the code in the patch. I think that optionally allowing the plugin to support Netscape 4.x (LiveConnect) as well as Mozilla (XPConnect) adds value..
On or around line 386 of LegacyPlugin.cpp I put the following in NPP_Destroy() to keep the plugin from calling into a non-initialized pointer:

#ifndef MOZ_ACTIVEX_PLUGIN_LIVECONNECT
    pData->pScriptingPeer->Release();
#endif /* MOZ_ACTIVEX_PLUGIN_LIVECONNECT */

Now that I'm looking into the XPConnect stuff I see that this is a bug in the XPConnect version as well. The real fix should be to set the pointer pData->pScriptingPeer to NULL when it gets created by new up in NPP_New(). This way it gets detected as NULL by NPP_GetValue() and gets initialized. The end of NPP_New() should be:

NPError NP_LOADDS NPP_New(NPMIMEType pluginType, NPP instance, uint16 mode,
                          int16 argc, char* argn[], char* argv[], NPSavedData* saved)
{
    . . .
    pData->pScriptingPeer = NULL; // <-- this is the line to add.
    instance->pdata = pData;
    return NPERR_NO_ERROR;
}

That should fix the dialogs that pop up in XPConnect versions and allow us to remove the #ifndef MOZ_ACTIVEX_PLUGIN_LIVECONNECT from NPP_Destroy() for 4.x versions as well.
In comment #4, Paul.Oswald@medec.com wrote: [...]. Yes, the interface a scriptable plugin presents to XPConnect is definable at runtime. If different interfaces are used, then multiple type libraries will be necessary for all of the possible interfaces.
Hmm, perhaps definable is too strong a word, I really meant to say "selectable". I don't know if we have any support for constructing type libraries at runtime.
An XPConnect bridge that allows the JS to call arbitrary methods on the plugin would be nice. Internally the plugin would receive the call and args and pass them to the real control for processing via IDispatch.
I have Invoke() working with XPConnect->ActiveX. Currently, I am working on passing parameters back to JavaScript. (I am currently trying to figure out if I can return a generic object to Javascript. In the LiveConnect version I was returning a java_lang_Object* from the native methods. I am looking for the equivalent idea in JavaScript.) Once that is done, GetProperty() should work as well. If you would like to see what I have so far, I could post another patch. Otherwise, I expect to have this done by the end of the week. I would love to be able to construct an interface at runtime so that we could call the functions directly without passing them as parameters to Invoke(), but I just don't see how it is possible with XPConnect in its current form. Does anyone see a way? For my purposes, calling name.Invoke("function") is sufficient, since I can just write out my own Javascript to call it. If anyone wants npmozax.dll to be scriptable with existing IE content, then XPConnect would have to support calling: name.function() I don't even know how common scripted ActiveX controls are on the web. The problem seems to come down to the mechanism of typelibs. In Ms COM, typelibs are used to create VTable interfaces which are faster but can only be used at build time. You can also have run-time IDispatch interfaces which are what the Scripting languages use. A COM object can also be defined to have dual interfaces.
Created attachment 71733 [details] [diff] [review] Support for ActiveX invoke() and getProperty() through XPConnect This patch supports invoke() and getProperty() through LiveConnect -or- XPConnect. I built it against a snapshot I downloaded today so it should work ok. I took out the dsw and dsp files since they were getting messed up. Also, I added ifdefs to the makefile.win and Legacyplugin.cpp so you can choose what kind of support you want to build. Could someone else please test this version for me? Again, sorry if the patch itself is not the best quality. If anyone has any tips on how I can do it better I would love to hear them.
Created attachment 71737 [details] VisualC++ dsp file
Created attachment 71738 [details] VisualC++ dsw file
I have tested this code with XPConnect and Mozilla 0.9.9 on Win2k and it seems to work ok for me. It was originally developed on Win95 and Mozilla 0.9.8. I am having trouble getting my code to work on Netscape 4.79 on Windows 2000. It fails around this line in GetProperty():

const char* property = JRI_GetStringUTFChars( env, a );
--> NPP npp = (NPP)self->getPeer(env);

This ends up calling the function:

return (jint)JRI_CallMethodInt(env)(env, JRI_CallMethod_op, self, methodID_netscape_plugin_Plugin_getPeer);

which fails with an unhandled exception. It is my understanding that LiveConnect is not (nor will ever be) part of mozilla, however this is where Adam's ActiveX control is hosted, and it seems like it would be good functionality to have for Netscape 4.x. Is anyone else out there testing any of this code with either XPConnect or LiveConnect?
Future. Patches look good, so I will attempt to incorporate them into the plugin when time allows.
Target Milestone: --- → Future
Created attachment 77046 [details] updated LegacyPlugin.cpp Adam, Thanks for reviewing and accepting the patch. We have since updated LegacyPlugin.cpp to incorporate SetProperty() for LiveConnect and XPConnect as well. There is also a GetNProperty() in the XPConnect implementation that will return a value as a true JavaScript number type. I could not find a way to return a number type using overloading, so this is the way we went. I don't know if you will want to bother including that. GetProperty() can simply do an itoa() conversion. I also figured out that the problems I was having with 4.x on win2k was due to a problem with the npp->getPeer call. This was a FAQ on the newsgroups a while ago and is fixed here. Good luck with the 1.0 release. I'm sure you are all very busy.
Created attachment 77047 [details] updated interfaces idl file includes interfaces for Get and SetProperty()
Created attachment 81715 [details] [diff] [review] Unified patch This patch is a substantially cleaned up and unified version of the preceding ones. I've also rewritten makefile.win to stick all the configurable things closer to the top. The plugin builds with no scripting support and with LiveConnect but there is a nasty dependency issue in XPConnect - it's dependent on nsMemory::Alloc and I'd like to be able to squash that if possible. This patch builds but scripting has not been tested to see if it works. From the looks of it, a lot of the LiveConnect/XPConnect functionality (e.g. in the Invoke functions) should be shared. I am mulling over checkin options. Since the plugin has NO scripting at the moment, and all the scripting code is #ifdef'd I don't see any harm checking in with if XPConnect/LiveConnect stuff disabled by the makefile.
Attachment #71330 - Attachment is obsolete: true
Attachment #71733 - Attachment is obsolete: true
Attachment #77046 - Attachment is obsolete: true
Attachment #77047 - Attachment is obsolete: true
I've checked in the scripting work in progress as is. The makefile.win disables LiveConnect/XPConnect support by default so it should behave exactly as before unless you uncomment the settings that turn this feature on. XPConnect has a link error so I commented out the actual functionality in the script peer methods until we can figure a way to allocate XPCOM memory without pulling in XPCOM (if that's possible). LiveConnect builds but I'm still having problems trying to persuade NS 4.x that it actually is a LiveConnect'ed plugin. Once I resolve these I will actually be able to work on the invoke/set/getProperty methods. I've copied npmozax.dll & MozAxPlugin.class into the plugins dir and my NPP_GetJavaClass looks right so I'm not sure why its reporting my object has no methods or properties. Does anyone have any ideas? I'll will attach the documentation for the defunct NCompass ActiveX plugin (dragged from) which I've been using as a reference as a good way to expose ActiveX via LiveConnect.
Created attachment 82110 [details] NCompass - Activating Your Web Site with ScriptActive
Created attachment 82111 [details] NCompass - Authoring ActiveX Controls for the ScriptActive Plug-in
See also here for a few other bits and pieces:
Made another checkin today. The major issues with LiveConnect (pure evil!) have been resolved and the plugin now has basic functionality - invoke() works and setProperty() might too. I checked in a simple test_liveconnect.html to try it out. Build with LiveConnect enabled and copy MozAxPlugin.class and npmozax.dll to the plugins dir to have a go. I have another checkin pending to make the plugin recognize more basic Java types (current it only does strings), like java.lang.Boolean etc. and convert them into the IDispatch VARIANT counterparts. The getProperty impl will take a little more work since I will have to write a _VariantToJRIObject method to process the result back into LiveConnect land. The same goes for returning a result from the invoke() method. LiveConnect & XPConnect should probably live in seperate files since their impl is going to be quite big. I have yet to do anything significant with XPConnect.
An update on progress: LiveConnect is working pretty well. I have invoke, setProperty & getProperty working with all the hell using JRI and LiveConnect pretty much sorted. I have converters to convert VARIANTs to Java objects and vice versa and versions of invoke for 1-4 arguments and also an arg array (untested). Double & Float support isn't there yet because the java.lang.Double/Float classes have some native methods that I have to implement and haven't gotten around to. XPConnect is working, but still rudimentary. Invoke & setProperty work, getProperty is still commented out. The invoke method takes no arguments and one of the things to figure out is the best way let it take multiple args of varying types. Plugin can be built with no scripting support, or with LiveConnect and/or XPConnect support. Still to do: Doubles & Floats. Event sinks (future enhancement). Align plugin direction with David's work on COM support in JS (tomorrow). Makefile is old style makefile.win and won't work with gmake builds without copying xpidl.exe and some other tools into dist\win32_d.obj\bin. Once LiveConnect & XPConnect are working more or less without obvious issues I'll mark this bug fixed. Future issues can be dealt by new bugs.
XPConnect & LiveConnect scripting are both working now. Visit for more info. Marking FIXED.-
Status: NEW → RESOLVED
Last Resolved: 17 years ago
Resolution: --- → FIXED
Component: Embedding: ActiveX Wrapper → Embedding: ActiveX Wrapper
Product: Core → Core Graveyard
|
https://bugzilla.mozilla.org/show_bug.cgi?id=127710
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Idea: yjq_naiive Developer: yjq_naiive
1 sounds like the minimum value we can get. Why? Is it always true?
When n ≤ 2, we can do nothing.
When n ≥ 3, n - 1 is not a divisor of n (it leaves remainder 1), so we can choose it and get 1 as the result.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    if (n == 2) {
        puts("2");
    } else {
        puts("1");
    }
    return 0;
}
Idea: yanQval Developer: yanQval
Consider the number of people wearing hats of the same color.
Let bi = n - ai represent the number of people wearing the same type of hat as the i-th person.
Notice that people wearing the same type of hat must have the same b. Suppose the number of people having the same bi is t; there would then be exactly t / bi kinds of hats with bi wearers each. Therefore, t must be a multiple of bi. This is also sufficient: just give bi people a new hat color, one bunch at a time.
#include <bits/stdc++.h>
using namespace std;

const int maxn = 100010;

int n;
pair<int,int> a[maxn];
int Ans[maxn], cnt;

int main() {
    ios::sync_with_stdio(false);
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> a[i].first;
        a[i].first = n - a[i].first;
        a[i].second = i;
    }
    sort(a + 1, a + n + 1);
    for (int l = 1, r = 0; l <= n; l = r + 1) {
        for (r = l; r < n && a[r + 1].first == a[r].first; ++r);
        if ((r - l + 1) % a[l].first) {
            cout << "Impossible" << endl;
            return 0;
        }
        for (int i = l; i <= r; i += a[l].first) {
            cnt++;
            for (int j = i; j < i + a[l].first; ++j) Ans[a[j].second] = cnt;
        }
    }
    cout << "Possible" << endl;
    for (int i = 1; i <= n; i++) cout << Ans[i] << ' ';
    return 0;
}
Idea: fjzzq2002 Developer: fjzzq2002
No hint here :)
Let f[i][j] be the number of ways such that there are j bricks of a different color from their left-adjacent brick among bricks numbered 1 to i. Considering whether the i-th brick is of the same color as the (i - 1)-th, we have f[i][j] = f[i - 1][j] + f[i - 1][j - 1]·(m - 1). f[1][0] = m and the answer is f[n][k].
Also, there is a possibly easier solution. We can first choose the bricks of a different color from their left and then choose a different color for each of them, so the answer is simply $\binom{n-1}{k}\, m\, (m-1)^k$.
//quadratic
#include <bits/stdc++.h>
using namespace std;

const int MOD = 998244353;
int n, m, k;
long long f[2005][2005];

int main() {
    scanf("%d%d%d", &n, &m, &k);
    f[1][0] = m;
    for (int i = 1; i < n; ++i)
        for (int j = 0; j <= k; ++j)
            (f[i + 1][j] += f[i][j]) %= MOD,
            (f[i + 1][j + 1] += f[i][j] * (m - 1)) %= MOD;
    printf("%d\n", int(f[n][k]));
}
//linear
#include <bits/stdc++.h>
using namespace std;

const int MOD = 998244353;
typedef long long ll;
ll fac[2333], rfac[2333];

ll qp(ll a, ll b) {
    ll x = 1;
    a %= MOD;
    while (b) {
        if (b & 1) x = x * a % MOD;
        a = a * a % MOD;
        b >>= 1;
    }
    return x;
}

int n, m, k;

int main() {
    scanf("%d%d%d", &n, &m, &k);
    fac[0] = 1;
    for (int i = 1; i <= n; ++i) fac[i] = fac[i - 1] * i % MOD;
    rfac[n] = qp(fac[n], MOD - 2);
    for (int i = n; i; --i) rfac[i - 1] = rfac[i] * i % MOD;
    printf("%lld\n", m * qp(m - 1, k) % MOD * fac[n - 1] % MOD
        * rfac[k] % MOD * rfac[n - 1 - k] % MOD);
}
Idea: fateice Developer: fateice
Sorry for the weak pretest.
Consider the MST of the graph.
Consider the minimum spanning tree formed from the n vertexes. We can find that the distance between two vertexes is the maximum weight of the edges on the path between them in the minimum spanning tree (this is clear from the correctness of the Kruskal algorithm).
Take any minimum spanning tree and consider some edge in this spanning tree. If this edge has all special vertexes on its one side, it cannot be the answer. Otherwise, it can always contribute to the answer, since for every special vertex, we can take another special vertex on the other side of this edge. Therefore, answers for all special vertexes are the maximum weight of the edges that have special vertexes on both sides.
Besides implementing this directly, one can also run the Kruskal algorithm and maintain the number of special vertexes of each connected component in the union-find-set. Stop adding new edges when all special vertexes are connected together and the last edge added should be the answer.
#include <bits/stdc++.h>
#define L long long
#define vi vector<int>
#define pb push_back
using namespace std;

int n, m, t, f[100010], p;
bool a[100010];

inline int fa(int i) {
    return f[i] == i ? i : f[i] = fa(f[i]);
}

struct edge {
    int u, v, w;
    inline void unit() {
        u = fa(u);
        v = fa(v);
        if (u != v) {
            if (a[u]) f[v] = u;
            else f[u] = v;
            if (a[u] && a[v]) p--;
            if (p == 1) {
                for (int i = 1; i <= t; i++) printf("%d ", w);
                printf("\n");
                exit(0);
            }
        }
    }
} x[100010];

inline bool cmp(edge a, edge b) {
    return a.w < b.w;
}

int main() {
    int i, j;
    scanf("%d%d%d", &n, &m, &t);
    p = t;
    for (i = 1; i <= t; i++) {
        scanf("%d", &j);
        a[j] = 1;
    }
    for (i = 1; i <= m; i++) scanf("%d%d%d", &x[i].u, &x[i].v, &x[i].w);
    sort(x + 1, x + m + 1, cmp);
    for (i = 1; i <= n; i++) f[i] = i;
    for (i = 1; i <= m; i++) x[i].unit();
    return 0;
}
Idea: fjzzq2002 Developer: fjzzq2002
How to solve n = 2?
Let $s_i = x_1 + x_2 + \dots + x_i$ and $t_i = \sqrt{s_i}$ (every prefix sum $s_i$ has to be a perfect square $t_i^2$), and let $x_{\max} = 2 \times 10^5$.
Since $x_{2i} = s_{2i} - s_{2i-1} = t_{2i}^2 - t_{2i-1}^2 \ge (t_{2i-1} + 1)^2 - t_{2i-1}^2 = 2t_{2i-1} + 1$, we get $2t_{2i-1} + 1 \le x_{\max}$, hence $t_{2i-1} < x_{\max}$.
For every $x \le x_{\max}$, we can precalculate all possible pairs $(a, b)$ such that $x = b^2 - a^2$: enumerate all possible $a$, then for every $a$ enumerate $b$ from small to large starting from $a + 1$ and stop when $b^2 - a^2 > x_{\max}$, recording this $(a, b)$ for $x = b^2 - a^2$. Since $b^2 - a^2 \ge 2a + 1$, only $a < x_{\max} / 2$ need to be tried, and for each $a$ there are at most $\sqrt{x_{\max}}$ valid values of $b$, so the complexity of this precalculation is at most $O(x_{\max} \sqrt{x_{\max}})$.
Now, we can try to find a possible $s$ from left to right. Since $x_{2i-1}$ is positive, we need to ensure $t_{2i-2} < t_{2i-1}$. Because $x_{2i} = t_{2i}^2 - t_{2i-1}^2$, we can try all precalculated $(a, b)$ such that $x_{2i} = b^2 - a^2$. If we have several choices, we should choose the one with the minimum possible $a$, because if the current sum is bigger, it will be harder for the remaining numbers to stay positive.
Bonus: Solve $n \le 10^3$, $x_{2i} \le 10^9$.
$t_{2i}^2 - t_{2i-1}^2 = (t_{2i} - t_{2i-1})(t_{2i} + t_{2i-1})$. Factor $x_{2i}$.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<int,int> pii;

int n;
ll sq[100099];
#define S 200000
vector<pii> v[S + 55];

int main() {
    for (int i = 1; i <= S; ++i) {
        if (i * 2 + 1 > S) break;
        for (int j = i + 1; j * (ll)j - i * (ll)i <= S; ++j)
            v[j * (ll)j - i * (ll)i].push_back(pii(i, j));
    }
    scanf("%d", &n);
    for (int i = 2; i <= n; i += 2) {
        int x;
        scanf("%d", &x);
        for (auto t : v[x])
            if (sq[i - 2] < t.first) {
                sq[i - 1] = t.first, sq[i] = t.second;
                break;
            }
        if (!sq[i - 1]) {
            puts("No");
            return 0;
        }
    }
    puts("Yes");
    for (int i = 1; i <= n; ++i)
        printf("%lld%c", sq[i] * (ll)sq[i] - sq[i - 1] * (ll)sq[i - 1], " \n"[i == n]);
}
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int n;
vector<int> d[200099];
ll su[100099], x[100099];

int main() {
    for (int i = 1; i <= 200000; ++i)
        for (int j = i; j <= 200000; j += i) d[j].push_back(i);
    scanf("%d", &n);
    for (int i = 2; i <= n; i += 2) scanf("%lld", x + i);
    for (int i = 2; i <= n; i += 2) {
        for (auto j : d[x[i]]) {
            int a = j, b = x[i] / j;
            if (((a + b) & 1) || a >= b) continue;
            int s1 = (b - a) / 2;
            if (su[i - 2] < (ll)s1 * s1) x[i - 1] = (ll)s1 * s1 - su[i - 2];
        }
        if (x[i - 1] == 0) {
            puts("No");
            return 0;
        }
        su[i - 1] = su[i - 2] + x[i - 1];
        su[i] = su[i - 1] + x[i];
    }
    puts("Yes");
    for (int i = 1; i <= n; i++) printf("%lld%c", x[i], " \n"[i == n]);
}
1081F - Tricky Interactor
Idea: fjzzq2002 Developer: fateice, fjzzq2002
Yet another binary search?
What about parity?
Assume the number of 1s changed from $a$ to $b$ after flipping a range of length $s$. Let the number of 1s in the range before this flip be $t$; then $b - a = (s - t) - t = s - 2t$, so $t = \frac{s - (b - a)}{2}$ and, in particular, $b - a \equiv s \pmod 2$.
So if we made a query $(l, r)$ and the parities of the lengths of $[1, r]$ and $[l, n]$ are different, we can find out which one was actually flipped from the parity of the delta. Also, if we only want $[1, r]$ or $[l, n]$ to be flipped and nothing else, we can keep querying $(l, r)$ and record all the flips that happened until it is the actual case (since flipping a range twice is equivalent to not flipping at all). The expected number of queries needed is 3.
Let $s_{a, b}$ denote the expected number of remaining queries needed to complete our goal, where currently $a \equiv$ (number of times $[1, r]$ has been flipped) $\pmod 2$ and $b \equiv$ (number of times $[l, n]$ has been flipped) $\pmod 2$. Then $s_{1, 0} = 0$ (the goal), and we are going to find out $s_{0, 0}$.
s0, 0 = (s0, 1 + s1, 0) / 2 + 1, s0, 1 = (s0, 0 + s1, 1) / 2 + 1, s1, 1 = (s1, 0 + s0, 1) / 2 + 1.
Solve this equation you'll get s0, 0 = 3, s0, 1 = 4, s1, 1 = 3.
Since each forced flip needs only 3 queries in expectation and we need O(n) of them, it's easy to arrive at such arrangements.
When n is even, try to query (1, 1), (1, 1), (2, 2), (2, 2), (3, 3), (3, 3), ..., (n, n), (n, n) and use the method described above to flip [1, 1], [1, 1], [1, 2], [1, 2], [1, 3], [1, 3], ..., [1, n], [1, n]. Thus we'll know s1, s1 + s2, s1 + s2 + s3, ..., s1 + s2 + ... + sn, and s becomes obvious.
When n is odd, try to query (2, n), (2, n), (1, 2), (1, 2), (2, 3), (2, 3), ..., (n - 1, n), (n - 1, n) and use the method described above to flip [2, n], [2, n], [1, 2], [1, 2], [1, 3], [1, 3], ..., [1, n], [1, n]. Thus we'll know s2 + s3 + ... + sn, s1 + s2, s1 + s2 + s3, ..., s1 + s2 + ... + sn, and s also becomes obvious.
Also, we can flip every range only once instead of twice by recording whether every element is currently flipped or not, although it will be a bit harder to write.
#include <bits/stdc++.h>
using namespace std;

int n, ss[10005], sn = 0, q[333];

bool tf(int l, int r) {
    cout << "? " << l << " " << r << endl << flush;
    cin >> ss[++sn];
    return (ss[sn] - ss[sn - 1]) & 1;
}

int main() {
    cin >> n >> ss[0];
    q[n] = ss[0];
    for (int i = 1; i < n; ++i) {
        int j = i - n % 2;
        if (j == 0) {
            int t[2] = {0, 0}, u = ss[sn];
            while (t[0] != 1 || t[1] != 0) t[tf(2, n)] ^= 1;
            int v = ss[sn];
            q[i] = q[n] - (n - 1 - v + u) / 2;
            while (t[0] || t[1]) t[tf(2, n)] ^= 1;
        } else {
            int t[2] = {0, 0}, u = ss[sn];
            while (t[0] != 1 || t[1] != 0) t[(tf(j, i) ^ i) & 1] ^= 1;
            int v = ss[sn];
            q[i] = (i - v + u) / 2;
            while (t[0] || t[1]) t[(tf(j, i) ^ i) & 1] ^= 1;
        }
    }
    cout << "! ";
    for (int i = 1; i <= n; ++i) printf("%d", q[i] - q[i - 1]);
    cout << endl << flush;
}
1081G - Mergesort Strikes Back
Idea: quailty Developer: fateice, fjzzq2002, yjq_naiive
How does "merge" work? What is this "mergesort" actually doing?
First, consider how "merge" will work when dealing with two arbitrary arrays. Partition every array into several blocks: find all elements that are larger than all elements before them and use these elements as the start of the blocks. Then, by 'merging' we're just sorting these blocks by their starting elements. (This is because that when we decided to put the start element of a block into the result array, the remaining elements in the block will be sequentially added too since they're smaller than the start element)
Now back to the merge sort. What this "mergesort" is actually doing is dividing the array into several segments (i.e. the ranges reaching the depth threshold). In every segment, the relative order stays the same when merging, so the expected number of inversions each segment contributes is just $\frac{l(l-1)}{4}$ (for a segment of length l). Again, suppose every segment is divided into the blocks described above; we're just sorting those blocks altogether by their beginning elements.
Let's consider inversions formed from two blocks in two different segments. Say the elements forming the inversion are initially the i-th and j-th of those two segments (from left to right); then the blocks they belong to start from the maximum of the first i elements and the first j elements of those two segments. Consider the maximum among these i + j numbers: if it is one of these two elements, the inversion can't be formed. The probability that the maximum is neither of them is $\frac{i+j-2}{i+j}$, and if that is the case, there is a 50% chance that the order of the i-th and j-th elements is different from that of the two maximums, because these two elements' order can be changed by swapping them while leaving the maximums' order unchanged. So their probability of forming an inversion is just $\frac{i+j-2}{2(i+j)} = \frac{1}{2} - \frac{1}{i+j}$.
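Summing this over all pairs of positions in two segments of lengths a and b gives the expected number of cross-inversions between them; this is what the calc() routine in the code below evaluates with precalculated partial sums of reciprocals (harmonic numbers), taken modulo the given prime:

$$\sum_{i=1}^{a}\sum_{j=1}^{b}\left(\frac{1}{2}-\frac{1}{i+j}\right)=\frac{ab}{2}-\sum_{i=1}^{a}\left(H_{i+b}-H_{i}\right),\qquad H_k=\sum_{t=1}^{k}\frac{1}{t}.$$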
Enumerating two segments and i, and precalculating the partial sums of reciprocals, we can calculate the expectation of inversions formed from these two segments in O(n) time. However, there might be O(n) segments. But there is one more property for this problem: those segments are of at most two different lengths. By considering pairs of equal lengths only once, we can get an O(n) or O(n log n) solution.
Do an induction on k.
Suppose for k - 1 we have only segments of length a and a + 1.
If 2|a, let a = 2t, from 2t and 2t + 1 only segments of length t and t + 1 will be formed.
Otherwise, let a = 2t + 1, from 2t + 1 and 2t + 2 still only segments of length t and t + 1 will be formed.
Bonus: solve this problem when the segments' length can be arbitary and n ≤ 105.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define fi first
#define se second
#define SZ 123456

int n, k, MOD, ts[SZ], tn = 0;
ll inv[SZ], invs[SZ];

ll qp(ll a, ll b) {
    ll x = 1;
    a %= MOD;
    while (b) {
        if (b & 1) x = x * a % MOD;
        a = a * a % MOD;
        b >>= 1;
    }
    return x;
}

void go(int l, int r, int h) {
    if (h <= 1 || l == r) ts[++tn] = r - l + 1;
    else {
        int m = (l + r) >> 1;
        go(l, m, h - 1);
        go(m + 1, r, h - 1);
    }
}

map<ll,int> cnt;

ll calc(int a, int b) {
    ll ans = a * (ll)b % MOD;
    for (int i = 1; i <= a; ++i) ans -= (invs[i + b] - invs[i]) * 2LL, ans %= MOD;
    return ans;
}

int main() {
    scanf("%d%d%d", &n, &k, &MOD);
    for (int i = 1; i <= max(n, 2); ++i)
        inv[i] = qp(i, MOD - 2),
        invs[i] = (invs[i - 1] + inv[i]) % MOD;
    go(1, n, k);
    ll ans = 0;
    for (int i = 1; i <= tn; ++i)
        ++cnt[ts[i]], ans += ts[i] * (ll)(ts[i] - 1) / 2, ans %= MOD;
    for (auto t : cnt)
        if (t.se >= 2)
            ans += calc(t.fi, t.fi) * ((ll)t.se * (t.se - 1) / 2 % MOD), ans %= MOD;
    for (auto a : cnt)
        for (auto b : cnt)
            if (a.fi < b.fi)
                ans += calc(a.fi, b.fi) * ((ll)a.se * b.se % MOD), ans %= MOD;
    ans = ans % MOD * inv[2] % MOD;
    ans = (ans % MOD + MOD) % MOD;
    printf("%d\n", int(ans));
}
1081H - Palindromic Magic
Idea: fjzzq2002 Developer: fjzzq2002, yjq_naiive
This problem is quite educational and we didn't expect anyone to pass. :P
What will be counted twice?
Warning: This editorial is probably new and arcane for ones who are not familiar with this field. If you just want to get a quick idea about the solution, you can skip all the proofs (they're wrapped in spoiler tags).
Some symbols: All indices of strings start from zero. xR stands for the reverse of string x. (e.g. 'abc'R = 'cba'), xy stands for concatenation of x and y. (e.g. x = 'a', y = 'b', xy = 'ab'), xa stands for concatenation of a copies of x (e.g. x = 'ab', x2 = 'abab'). x[a, b] stands for the substring of x starting and ending from the a-th and b-th character. (e.g. 'abc'[1, 2] = 'bc')
Border of x: strings which are a common prefix & suffix of x. Formally, x has a border of length t (namely x[0, t - 1]) iff $x_i = x_{|x| - t + i}$ for $0 \le i < t$.
Period of x: x has a period of length t iff $x_i = x_{i + t}$ ($0 \le i < |x| - t$). When $t \mid |x|$ we also call t a full period. From the formulas it's easy to see that x has a period of length t iff x has a border of length |x| - t.
Power: x is a power iff the minimum full period of x isn't |x|. e.g. abab is a power.
Lemma 1 (weak periodicity lemma): if p and q are periods of s, p + q ≤ |s|, gcd(p, q) is also a period of s.
Suppose p < q, d = q - p.
If |s| - q ≤ i < |s| - d, si = si - p = si + d.
If 0 ≤ i < |s| - q, si = si + q = si + q - p = si + d.
So q - p is also a period. Using Euclid algorithm we can get gcd(p, q) is a period.
Lemma 2: Let S be the set of period lengths $\le |s| / 2$ of s. If S is non-empty, then $\min S$ divides every element of S.
Let $\min S = v$ and let $u$ be any element of S. Since $v + u \le |s|$, $\gcd(v, u)$ is also a valid period, so $\gcd(v, u) \ge v$, hence $v \mid u$.
Let border(x) be the longest (not-self) border of x. e.g. border('aba') = 'a', border('ab') = ".
If x is a palindrome, its palindromic prefix and suffix must be its border. Therefore, its (not-self) longest palindromic prefix (suffix) is border(x).
Let |x| = a. x has a border of length b iff $x_i = x_{a - b + i}$ ($0 \le i < b$); x has a palindromic prefix of length b iff $x_i = x_{b - 1 - i}$ ($0 \le i < b$). Since $x_i = x_{a - 1 - i}$ (x is a palindrome), these two conditions are just the same.
If S = pq, p and q are palindromic and non-empty, we call (p, q) a palindromic decomposition of S. If S = pq, p and q are palindromic, q is non-empty (p can be empty) we call (p, q) a non-strict palindromic decomposition of S.
Lemma 3: if S = x1 x2 = y1 y2 = z1 z2, |x1| < |y1| < |z1|, x2, y1, y2, z1 are palindromic and non-empty, then x1 and z2 are also palindromic.
Let z1 = y1 v.
vR is a suffix of y2, also a suffix of x2. So v is a prefix of x2, then x1v is a prefix of z1.
Since y1 is a palindromic prefix of z1, z1 = y1 v, |v| is a period of z1. Since x1v is a prefix, so |v| is also a period of x1 v.
Let t be some arbitrarily large number (you can think of it as ∞); then x1 is a suffix of vt.
Since vR is prefix of z1, x1 is a prefix of (vR)t. So x1R is a suffix of vt, then x1 = x1R, so x1 is palindromic. z2 is palindromic similarly.
Lemma 4: Suppose S has some palindromic decomposition. Let the longest palindromic prefix of S be a, S = ax, longest palindromic suffix of S be b, S = yb. At least one of x and y is palindromic.
If none of them are palindromic, let S = pq be a valid palindromic decomposition of S, then S = yb = pq = ax, by Lemma 3, contradiction.
Lemma 5: S = p1 q1 = p2 q2 (|p1| < |p2|, p1, q1, p2, q2 are palindromic, q1 and q2 are non-empty), then S is a power.
We prove this by proving gcd(|p2| - |p1|, |S|) is a period.
Let |S| = n, |p2| - |p1| = t.
Because p1 is a palindromic prefix of p2, t is a period of p2. Similarly t is a period of q1. Since they have a common part of length t (namely S[p1, p2 - 1]), t is a period of S. So t is a period of S.
For $0 \le x < t$, $s_x = s_{|p_2| - 1 - x} = s_{n - 1 + |p_1| - (|p_2| - 1 - x)} = s_{n - t + x}$ (the first two equalities hold because $p_2$ and $q_1$ are palindromic). So n - t is also a period of S.
Since t + n - t = n, gcd(t, n - t) = gcd(t, n) is also a period of S. (weak periodicity lemma)
Lemma 6: Let S = p1 q1 = p2 q2 = ... = pt qt be all non-strict palindromic decompositions of S, h be the minimum full period of S. When t ≠ 0, t = |S| / h.
From Lemma 5, it's clear that h = |S| iff t = 1. In the following t ≥ 2 is assumed.
Let α = S[0, h - 1], because α is not a power (otherwise s will have smaller full period), α has at most one non-strict palindromic decomposition (from Lemma 5).
Let S = pq be any non-strict palindromic decomposition, then max(|p|, |q|) ≥ h. When |p| ≥ h, α = p[0, h - 1], so
, then
is palindromic. Similarly
is also palindromic. When |q| ≥ h similar arguments can be applied. Therefore,
and
is a non-strict palindromic decomposition of α.
Therefore, α has a non-strict palindromic decomposition. Let its only non-strict palindromic decomposition be α[0, g - 1] and α[g, h - 1]. Therefore, every $p_i$ must satisfy $|p_i| \equiv g \pmod h$, so $t \le |S| / h$. Also, all these $|S| / h$ decompositions can indeed be obtained.
Lemma 7: Let S = p1 q1 = p2 q2 = ... = pt qt be all non-strict palindromic decompositions of S (|pi| < |pi + 1|). For every $1 \le i < t$, at least one of pi = border(pi + 1) and qi + 1 = border(qi) is true.
Instead of proving directly, we first introduce a way to compute all decompositions.
Let the longest palindromic prefix of S be a (a ≠ S), S = ax, longest palindromic suffix of S be b (it may be S), S = yb.
If x = b obviously S = ab is the only way to decompose.
If S = pq and p ≠ a, q ≠ b, p and q are palindromic, by Lemma 3 we have x, y are also palindromic.
So if neither x or y is palindromic, then S can't be composed to two palindromes.
If exactly one of x and y is palindromic, it's the only way to decompose S.
If both of them are palindromic, we can then find the second-longest non-self palindromic prefix of S: c. Let S = cz, if z is not palindromic or c = y, then S = ax = yb are the only non-strict palindromic decompositions. Otherwise, we can find all palindromic suffix of |S| whose lengths between |z| and |b|, their prefixes must also be palindromic (using Lemma 3 for ax and cz), then S = ax = cz = ... = yb (other palindromic suffixes and their corresponding prefixes are omitted)
Back to the proof of Lemma 7, the only case we need to prove now is S = ax = yb. Suppose the claim is incorrect, let p = border(a), s = border(y), S = ax = pq = rs = yb, (|a| > |p| > |y|, |a| > |r| > |y|, p and s are palindromic)
Continuing with the proof of Lemma 6, since t = 2, S = α2. If |p| ≥ |α|,
, so q would also be palindromic, contradiction. Therefore, |p| < |α| and similarly |s| < |α|. Let α = pq' = r's and the non-strict palindromic decomposition of α be α = βθ. Since α = pq' = βθ = r's, by Lemma 3 q' and r' should also be palindromic, contradiction.
Lemmas are ready, let's start focusing on this problem. A naive idea is to count the number of palindromes in A and in B, and multiply them. This will obviously count a string many times. By Lemma 7, suppose S = xy, to reduce counting, we can check if using border(x) or border(y) to replace x or y can also achieve S. If any of them do, reduce the answer by 1, then we're done.
So for a palindromic string x in A, we want to count strings that are attainable from both x and border(x) and subtract from the answer. Finding x and border(x) themselves can be simply done by the palindromic tree.
Let x = border(x)w, we want to count Ts in B that T = wS and both T and S are palindromic. Since |w| is the shortest period of x, w can't be a power.
If |w| > |S|, write w = SU, where S and U are both palindromes. Since w is not a power, it can be decomposed into two palindromes in at most one way (Lemma 5). We can find that only way (by checking the maximal palindromic suffix & prefix) and use hashing to check if it exists in B.
If |w| ≤ |S|, if S is not maximum palindromic suffix of T, w must be a power (Lemma 2), so we only need to check maximum palindromic suffixes (i.e. S = border(T)).
We need to do a similar thing for the ys in B, then add back the strings attainable from both border(x) and border(y). Adding back can be done in a similar manner, or by directly using hashing to find all matching ws.
Finding maximal palindromic suffix and prefix of substrings can be done by binary-lifting on two palindromic trees (one and one reversed). Let upi, j be the resulting node after jumping through fail links for 2j steps from node i. While querying maximal palindromic suffix for s[l, r], find the node corresponding to the maximal palindromic suffix of s[1, r] (this can be stored while building palindromic tree). If it fits into s[l, r] we're done. Otherwise, enumerate j from large to small and jump 2j steps (with the help of up) if the result node is still unable to fit into s[l, r], then jump one more step to get the result.
#include <bits/stdc++.h> using namespace std; const int N = 234567; const int LOG = 18; const int ALPHA = 26; const int base = 2333; const int md0 = 1e9 + 7; const int md1 = 1e9 + 9; struct hash_t { int hash0, hash1; hash_t(int hash0 = 0, int hash1 = 0):hash0(hash0), hash1(hash1) { } hash_t operator + (const int &x) const { return hash_t((hash0 + x) % md0, (hash1 + x) % md1); }; hash_t operator * (const int &x) const { return hash_t((long long)hash0 * x % md0, (long long)hash1 * x % md1); } hash_t operator + (const hash_t &x) const { return hash_t((hash0 + x.hash0) % md0, (hash1 + x.hash1) % md1); }; hash_t operator - (const hash_t &x) const { return hash_t((hash0 + md0 - x.hash0) % md0, (hash1 + md1 - x.hash1) % md1); }; hash_t operator * (const hash_t &x) const { return hash_t((long long)hash0 * x.hash0 % md0, (long long)hash1 * x.hash1 % md1); } long long get() { return (long long)hash0 * md1 + hash1; } } ha[N], hb[N], power[N]; struct palindrome_tree_t { int n, total, p[N], pos[N], value[N], parent[N], go[N][ALPHA], ancestor[LOG][N]; char s[N]; palindrome_tree_t() { parent[0] = 1; value[1] = -1; total = 1; p[0] = 1; } int extend(int p, int w, int n) { while (s[n] != s[n - value[p] - 1]) { p = parent[p]; } if (!go[p][w]) { int q = ++total, k = parent[p]; while (s[n] != s[n - value[k] - 1]) { k = parent[k]; } value[q] = value[p] + 2; parent[q] = go[k][w]; go[p][w] = q; pos[q] = n; } return go[p][w]; } void init() { for (int i = 1; i <= n; ++i) { p[i] = extend(p[i - 1], s[i] - 'a', i); } for (int i = 0; i <= total; ++i) { ancestor[0][i] = parent[i]; } for (int i = 1; i < LOG; ++i) { for (int j = 0; j <= total; ++j) { ancestor[i][j] = ancestor[i - 1][ancestor[i - 1][j]]; } } } int query(int r, int length) { r = p[r]; if (value[r] <= length) { return value[r]; } for (int i = LOG - 1; ~i; --i) { if (value[ancestor[i][r]] > length) { r = ancestor[i][r]; } } return value[parent[r]]; } bool check(int r, int length) { r = p[r]; for (int i = LOG - 1; ~i; --i) { if (value[ancestor[i][r]] >= length) { r = ancestor[i][r]; } } return value[r] == length; } } A, B, RA, RB; map<long long, int> fa, fb, ga, gb; long long answer; char a[N], b[N]; int n, m; hash_t get_hash(hash_t *h, int l, int r) { return h[r] - h[l - 1] * power[r - l + 1]; } int main() { #ifdef wxh010910 freopen("input.txt", "r", stdin); #endif scanf("%s %s", a + 1, b + 1); n = strlen(a + 1); m = strlen(b + 1); A.n = RA.n = n; B.n = RB.n = m; for (int i = 1; i <= n; ++i) { A.s[i] = RA.s[n - i + 1] = a[i]; ha[i] = ha[i - 1] * base + a[i]; } for (int i = 1; i <= m; ++i) { B.s[i] = RB.s[m - i + 1] = b[i]; hb[i] = hb[i - 1] * base + b[i]; } power[0] = hash_t(1, 1); for (int i = 1; i <= max(n, m); ++i) { power[i] = power[i - 1] * base; } A.init(); B.init(); RA.init(); RB.init(); answer = (long long)(A.total - 1) * (B.total - 1); for (int i = 2; i <= A.total; ++i) { ++fa[get_hash(ha, A.pos[i] - A.value[i] + 1, A.pos[i]).get()]; int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; if (A.value[i] <= A.value[p] << 1) { ++ga[get_hash(ha, l, r).get()]; } } for (int i = 2; i <= B.total; ++i) { ++fb[get_hash(hb, B.pos[i] - B.value[i] + 1, B.pos[i]).get()]; int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; if (B.value[i] <= B.value[p] << 1) { ++gb[get_hash(hb, l, r).get()]; } } for (int i = 2; i <= A.total; ++i) { int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; long long value = 
get_hash(ha, l, r).get(); if (gb.count(value)) { answer -= gb[value]; } int longest_palindrome_suffix = A.query(r, r - l + 1); if (longest_palindrome_suffix == r - l + 1) { continue; } if (RA.check(n - l + 1, r - l + 1 - longest_palindrome_suffix)) { int length = r - l + 1 - longest_palindrome_suffix; if (fb.count(get_hash(ha, l, l + length - 1).get()) && fb.count((get_hash(ha, l, r) * power[length] + get_hash(ha, l, l + length - 1)).get())) { --answer; } continue; } int longest_palindrome_prefix = RA.query(n - l + 1, r - l + 1); if (A.check(r, r - l + 1 - longest_palindrome_prefix)) { int length = longest_palindrome_prefix; if (fb.count(get_hash(ha, l, l + length - 1).get()) && fb.count((get_hash(ha, l, r) * power[length] + get_hash(ha, l, l + length - 1)).get())) { --answer; } continue; } } for (int i = 2; i <= B.total; ++i) { int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; long long value = get_hash(hb, l, r).get(); if (ga.count(value)) { answer -= ga[value]; } int longest_palindrome_suffix = B.query(r, r - l + 1); if (longest_palindrome_suffix == r - l + 1) { continue; } if (RB.check(m - l + 1, r - l + 1 - longest_palindrome_suffix)) { int length = longest_palindrome_suffix; if (fa.count(get_hash(hb, r - length + 1, r).get()) && fa.count((get_hash(hb, r - length + 1, r) * power[r - l + 1] + get_hash(hb, l, r)).get())) { --answer; } continue; } int longest_palindrome_prefix = RB.query(m - l + 1, r - l + 1); if (B.check(r, r - l + 1 - longest_palindrome_prefix)) { int length = r - l + 1 - longest_palindrome_prefix; if (fa.count(get_hash(hb, r - length + 1, r).get()) && fa.count((get_hash(hb, r - length + 1, r) * power[r - l + 1] + get_hash(hb, l, r)).get())) { --answer; } continue; } } for (int i = 2; i <= A.total; ++i) { int p = A.parent[i]; if (p < 2) { continue; } int l = A.pos[i] - (A.value[i] - A.value[p]) + 1, r = A.pos[i]; if (A.value[i] > A.value[p] << 1) { ++ga[get_hash(ha, l, r).get()]; } } for (int i = 2; i <= B.total; ++i) { int p = B.parent[i]; if (p < 2) { continue; } int l = B.pos[i] - B.value[i] + 1, r = B.pos[i] - B.value[p]; if (B.value[i] > B.value[p] << 1) { ++gb[get_hash(hb, l, r).get()]; } } for (auto p : ga) { answer += (long long)p.second * gb[p.first]; } printf("%lld\n", answer); return 0; }
Hope you enjoyed the round! See you next time!
|
https://codeforces.com/topic/64282/en6
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
Modules in Perl
Over the years, Perl has grown to include many new features. These features make the language richer and more useful, but they also make it more complex for programmers and the Perl maintainers.
Perl 5 partly solved this problem by making the language extensible with modules that range widely in complexity, adding anything from convenience variables to sophisticated database clients and Web development environments. Modules have made it possible for Perl to improve incrementally without changing the language itself.
This chapter begins with a discussion of Perl packages, which allow us to place variables and subroutines in a namespace hierarchy. Once we have discussed packages, we begin to discuss modules: how to use them, how to write them, and how to improve them.
By the end of this chapter, you should understand not just how Perl modules work, but also how you can use them effectively in your programs.
6.1 Packages
Programmers working on large projects often discover that a variable or subroutine name is being used by someone else. Perl and other languages provide packages, or namespaces, which make it easier to avoid such clashes. Packages are analogous to surnames in human society, allowing more than one David or Jennifer to coexist unambiguously.
6.1.1 Packages
Every global variable in Perl exists within a package, with the default package being main. The global variable $x is actually shorthand for $main::x, where main is the package, $x is the variable, and :: separates the package name from the unqualified variable name.
We can similarly refer to variables in other packages. For example, $fruit::mango, @fruit::kiwi, and %fruit::apple are all in the fruit package. As you can see, symbols representing a data type ($, @, or %) precede the package name, not the unqualified variable name. As with variables, packages spring to life when they are first referenced.
Package names may contain ::, allowing us to create what appear to be hierarchies. For instance, $fruit::tropical::kiwi is the variable $kiwi in the package fruit::tropical. However, these names are only significant to programmers; Perl does not notice or enforce hierarchies. As far as Perl is concerned, two unrelated modules can be under the same package hierarchy, and two related modules can be in completely different packages.
At any time in our program, we can set or retrieve the value of any global variable by giving its fully qualified name:
$main::x = 5;
$blueberry::x = 30;

print "main::x = $main::x\n";
print "blueberry::x = $blueberry::x\n";
6.1.2 Lexicals and packages
Lexicals exist outside of a package, in a separate area known as the scratchpad. They have nothing to do with packages or global variables. There is no relationship between $main::var and the lexical $var, except in the mind of a programmer. This program is perfectly legal, but hard for programmers to understand:
#!/usr/bin/perl
# filename: globals-and-lexicals.pl

use warnings;

$main::x = 10;   # Global
my $x = 20;      # Lexical

print "x = '$x'\n";                 # Prints 20 (lexical)
print "main::x = '$main::x'\n";     # Prints 10 (global)
Once the lexical $x is declared, $main::x must be retrieved with its fully qualified name. Otherwise, Perl will assume that $x refers to the lexical $x, rather than the global $main::x.
6.1.3 use strict
use strict tells the Perl compiler to forbid the use of unqualified global variables, avoiding the ambiguity that we saw in the preceding program. When use strict is active, $x must refer to a lexical explicitly declared with my. If no such lexical has been declared, the program exits with a compilation error:
#!/usr/bin/perl
# filename: counter.pl

use strict;
use warnings;

# Declare $counter lexical within the foreach loop
foreach my $counter (0 .. 10)
{
    print "Counter = $counter\n";
    $counter++;
}

# $counter has disappeared -- fatal compilation error!
print "Counter at the end is $counter\n";
We can fix this program by declaring $counter to be a top-level lexical:
#!/usr/bin/perl
# filename: new-counter.pl

use strict;
use warnings;

# Declare $counter to be lexical for the entire program
my $counter;

# Declare $index to be lexical within the foreach
foreach my $index (0 .. 10)
{
    print "Counter = $counter\n";
    $counter++;
}

# Counter still exists
print "Counter at the end is $counter\n";
6.1.4 use vars and our
Experienced Perl programmers include use strict in their programs, because of the number of errors it traps. However, referring to globals by their full names quickly gets tedious.
use vars helps by making an exception to use strict. Variables named in the list passed to use vars can be referred to by their unqualified names, even when use strict is active. For example, the following code tells Perl that $a, $b, and $c in the current package do not need to be fully qualified:
use vars qw($a $b $c);
In the case of a conflict between my and use vars, the lexical has priority. (After all, you can always set and retrieve the global's value using its fully qualified name, but the lexical has only one name.) The following program demonstrates this:
#!/usr/bin/perl
# filename: globals-and-lexicals-2.pl

use strict;
use warnings;

use vars qw($x);   # Allows us to write $main::x as $x

$x = 10;       # Sets global $main::x
my $x = 20;    # Sets lexical $x
$x = 30;       # Sets lexical $x, not global $main::x

print "x = '$x'\n";                 # Prints 30 (lexical)
print "main::x = '$main::x'\n";     # Prints 10 (global)
As of Perl 5.6, use vars has been deprecated in favor of our. our is similar to my, in that its declarations only last through the current lexical scope. However, our (like use vars) works with global variables, not lexicals. We can rewrite this program as follows using our:
#!/usr/bin/perl
# filename: globals-and-lexicals-with-our.pl

use strict;
use warnings;

our $x;        # Allows us to write $main::x as $x

$x = 10;       # Sets global $main::x
my $x = 20;    # Sets lexical $x
$x = 30;       # Sets lexical $x, not global $main::x

print "x = '$x'\n";                 # Prints 30 (lexical)
print "main::x = '$main::x'\n";     # Prints 10 (global)
6.1.5 Switching default packages
To change to a new default package, use the package statement:
package newPackageName;
A program can change default packages as often as it might like, although doing so can confuse the next person maintaining your code. Remember that package changes the default namespace; it does not change your ability to set or receive any global's value by explicitly naming its package.
There is a subtle difference between use vars and our that comes into play when we change packages. use vars ceases to have effect when you change to a different default package. For example:
package foo;        # Make the default package 'foo'
use vars qw($x);    # $x is shorthand for $foo::x
$x = 5;             # Assigns $foo::x

package bar;        # Make the default package 'bar'
print "'$x'\n";     # $x refers to $bar::x (undefined)

package foo;        # Make the default package 'foo' (again)
print "'$x'\n";     # $x refers to $foo::x
In this code, use vars tells Perl that $x is shorthand for $foo::x. When we switch into default package bar, $x no longer refers to $foo::x, but $bar::x. Without use strict, Perl allows us to retrieve the value of an undeclared global variable, which has the value undef. When we return to package foo, $x once again refers to $foo::x, and the previous value is once again available.
By contrast, global variables declared with our remain available with their short names even after changing into a different package:
package foo;        # Make the default package 'foo'
our $x;             # $x is shorthand for $foo::x
$x = 5;             # Assigns $foo::x

package bar;        # Make the default package 'bar'
print "'$x'\n";     # $x still refers to $foo::x
For $x to refer to $bar::x, we must add an additional our declaration immediately following the second package statement:
package foo;        # Make the default package 'foo'
our $x;             # $x is shorthand for $foo::x
$x = 5;             # Assigns $foo::x

package bar;        # Make the default package 'bar'
our $x;             # $x is now shorthand for $bar::x
print "'$x'\n";     # $x now refers to $bar::x
6.1.6 Subroutines
When we declare a subroutine, it is placed by default in the current package:
package abc;
sub foo { return 5; }    # Full name is abc::foo
This means that when working with more than one package, we may need to qualify subroutine names:
package numbers;                 # Default package is "numbers"
sub gimme_five { 5; }            # Define a subroutine

package main;                    # Default package is now "main"
my $x = gimme_five();            # Fails to execute main::gimme_five()
print "x = '$x'\n";              # Never reached -- fatal error above
This code exits with a fatal error, with Perl complaining that no subroutine main::gimme five has been defined. We can fix this by invoking the subroutine with its fully qualified name:
package numbers;                       # Default package is "numbers"
sub gimme_five { 5; }                  # Define a subroutine

package main;                          # Default package is now "main"
my $x = numbers::gimme_five();         # Qualify gimme_five() with a package
print "x = '$x'\n";                    # Prints 5
|
http://www.informit.com/articles/article.aspx?p=29475&seqNum=2
|
CC-MAIN-2019-09
|
en
|
refinedweb
|
We are pleased to announce an updated release of the Windows Azure Storage Client for Java. This release includes several notable features such as logging support, new API overloads, and full support for the 2013-08-15 REST storage server version. (See here for details). As usual all of the source code is available via github (note the updated location). You can download the latest binaries via maven:
<dependency>
<groupId>com.microsoft.windowsazure.storage</groupId>
<artifactId>microsoft-windowsazure-storage-sdk</artifactId>
<version>0.5.0</version>
</dependency>
Emulator Guidance
Please note that the 2013-08-15 REST version is currently unsupported by the storage emulator. An updated Windows Azure Storage Emulator is expected to ship with full support of these new features in the next couple of months. Users attempting to develop against the current version of the Storage emulator will receive Bad Request errors in the interim. Until then, users wanting to use the new features would need to develop and test against a Windows Azure Storage Account to leverage the 2013-08-15 REST version.
Samples
We have provided a series of samples on github to help clients get up and running with each storage abstraction and to illustrate some additional key scenarios. To run a given sample using Eclipse, simply load the samples project and update the following line in Utility.java to provide your storage credentials.
public static final String storageConnectionString = "DefaultEndpointsProtocol=http;AccountName=[ACCOUNT_NAME];AccountKey=[ACCOUNT_KEY]";
If you wish to use fiddler to inspect traffic while running the samples, please uncomment the following 2 lines in Utility.java.
// System.setProperty("http.proxyHost", "localhost");
// System.setProperty("http.proxyPort", "8888");
After updating Utiltity.java, right-click on the specific project you want to run and click on Run As Java Application.
A Note about Packaging and Versioning
We have migrated the Storage package out of the larger Windows Azure SDK for Java for this release. Developers who are currently leveraging the existing SDK will need to update their dependencies accordingly. Furthermore, the package names have been changed to reflect this new structure:
- com.microsoft.windowsazure.storage – RetryPolicies, LocationMode, StorageException, Storage Credentials etc. All public classes that are common across services
- com.microsoft.windowsazure.storage.blob – Blob convenience implementation, applications utilizing Windows Azure Blobs should include this namespace in their import statements
- com.microsoft.windowsazure.storage.queue – Queue convenience implementation, applications utilizing Windows Azure Queues should include this namespace in their import statements
- com.microsoft.windowsazure.storage.table – Table convenience implementation, applications utilizing Windows Azure Tables should include this namespace in their import statements
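For example, a blob-oriented application would now use imports along these lines (a sketch showing only a handful of classes):

// Common classes live directly under com.microsoft.windowsazure.storage ...
import com.microsoft.windowsazure.storage.CloudStorageAccount;
import com.microsoft.windowsazure.storage.StorageException;
// ... while the blob convenience layer lives in the .blob sub-package.
import com.microsoft.windowsazure.storage.blob.CloudBlobClient;
import com.microsoft.windowsazure.storage.blob.CloudBlobContainer;
import com.microsoft.windowsazure.storage.blob.CloudBlockBlob;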
For a more detailed list of changes in this release, please see the Change Log & Breaking Changes section below.
We are also adopting the SemVer specification regarding all of the storage client sdk components we provide. This will help provide consistent and predictable versioning guidance to developers who leverage the sdk.
Whats New
The 0.5.0 version of the Java client library provides full support for the 2013-08-15 REST service version (you can read more about the supported features here), as well as key client improvements listed below.
Support for Read Access Geo Redundant Storage
This release has full support for Read Access to the storage account data in the secondary region. This functionality needs to be enabled via the portal for a given storage account. You can read more about RA-GRS here. As mentioned in the blog, there is a getServiceStats API for Cloud[Blob|Table|Queue]Client that allows applications to easily retrieve the replication status and LastSyncTime for each service. Setting the location mode on the client object and invoking getServiceStats is shown in the example below. The LocationMode can also be configured on a per request basis by setting it on the RequestOptions object.
CloudStorageAccount httpAcc = CloudStorageAccount.parse(connectionString);
CloudTableClient tClient = httpAcc.createCloudTableClient();
// Set the LocationMode to SECONDARY_ONLY since getServiceStats is supported only on the secondary endpoints.
tClient.setLocationMode(LocationMode.SECONDARY_ONLY);
ServiceStats stats = tClient.getServiceStats();
Date lastSyncTime = stats.getGeoReplication().getLastSyncTime();
System.out.println(String.format("Replication status = %s and LastSyncTime = %s",stats.getGeoReplication().getStatus().toString(), lastSyncTime != null ? lastSyncTime.toString(): "empty"));
Expanded Table Protocol Support (JSON)
In the previous release all table traffic was sent using the AtomPub protocol. With the current release the default protocol is now JSON minimal metadata. (You can read more details regarding these protocols as well as view sample payloads here) This improvement allows the client to dramatically reduce the payload size of the request as well as reduce the CPU required to process the request. These improvements allow client applications to scale higher and realize lower overall latencies for table operations. An example of setting the tablePayloadFormat on the client object is shown in the example below. The tablePayloadFormat can also be configured on a per request basis by setting it on the TableRequestOptions object.
CloudStorageAccount httpAcc = CloudStorageAccount.parse(connectionString);
CloudTableClient tClient = httpAcc.createCloudTableClient();
// Set the payload format to JsonNoMetadata.
tClient.setTablePayloadFormat(TablePayloadFormat.JsonNoMetadata);
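The payload format can likewise be chosen per request. The following sketch only constructs the options object; it assumes it is later passed to a table operation overload that accepts TableRequestOptions.
TableRequestOptions jsonOptions = new TableRequestOptions();
// Only requests executed with these options will use this payload format.
jsonOptions.setTablePayloadFormat(TablePayloadFormat.JsonNoMetadata);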
When using JsonNoMetadata the client library will “infer” the property types by inspecting the type information on the POJO entity type provided by the client. Additionally, in some scenarios clients may wish to provide the property type information at runtime such as when querying with the DynamicTableEntity or doing complex queries that may return heterogeneous entities. To support this scenario the user should implement PropertyResolver which allows users to return an EdmType for each property based on the data received from the service. The sample below illustrates a propertyResolver implementation.
public static class Class1 extends TableServiceEntity implements PropertyResolver {
private String A;
private byte[] B;
public String getA() {
return this.A;
}
public byte[] getB() {
return this.B;
}
public void setA(final String a) {
this.A = a;
}
public void setB(final byte[] b) {
this.B = b;
}
@Override
public EdmType propertyResolver(String pk, String rk, String key, String value) {
if (key.equals("A")) {
return EdmType.STRING;
}
else if (key.equals("B")) {
return EdmType.BINARY;
}
return null;
}
}
This propertyResolver is set on the TableRequestOptions as shown below.
Class1 ref = new Class1();
ref.setA("myPropVal");
ref.setB(new byte[] { 0, 1, 2 });
ref.setPartitionKey("testKey");
ref.setRowKey(UUID.randomUUID().toString());
TableRequestOptions options = new TableRequestOptions();
options.setPropertyResolver(ref);
Table Insert Optimizations
In previous versions of the REST api the Prefer header was not supported for Insert operations. With the 2013-08-15 version the client no longer asks the service to echo the inserted entity back, so the HTTP status code on the TableResult for successful inserts is 204 (No Content) rather than 201 (Created). The echo content behavior can be re-enabled by using the insert(TableEntity, boolean) method and specifying true.
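A minimal sketch of re-enabling echo content is shown below; the CloudTable reference named table is a placeholder, and the overload is assumed to be the static TableOperation.insert factory described above.
Class1 entity = new Class1();
entity.setPartitionKey("testKey");
entity.setRowKey(UUID.randomUUID().toString());
// Passing true as the second argument requests content echo, so a successful
// insert returns 201 (Created) along with the stored entity.
table.execute(TableOperation.insert(entity, true));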
Table Reflection Optimizations
When clients are persisting POJO objects to the Table Service the client can now cache the type and property information to avoid repeated reflection calls. This optimization can dramatically reduce CPU during queries and other table operations. Note, clients can disable this cache by setting TableServiceEntity.setReflectedEntityCacheDisabled(true).
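Clients that prefer to opt out of the cache can do so with the single static setter mentioned above (sketch only):
// Disable the reflected entity cache; it is enabled by default.
TableServiceEntity.setReflectedEntityCacheDisabled(true);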
New APIs and overloads
In response to customer feedback we have expanded the api surface to add additional conveniences, including the following (a short usage sketch follows the list):
- CloudBlob.downloadRange
- CloudBlob.downloadToByteArray
- CloudBlob.downloadRangeToByteArray
- CloudBlob.uploadFromByteArray
- CloudBlob.uploadFromByteArray
- CloudBlob.downloadToFile
- CloudBlob.uploadFromFile
- CloudBlockBlob.uploadText
- CloudBlockBlob.downloadText
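The sketch below exercises a few of these conveniences; the container name, blob name, and file path are placeholders, and httpAcc is the CloudStorageAccount from the earlier snippets.
CloudBlobClient blobClient = httpAcc.createCloudBlobClient();
CloudBlobContainer container = blobClient.getContainerReference("mycontainer");
container.createIfNotExists();
CloudBlockBlob blob = container.getBlockBlobReference("sample.txt");
blob.uploadText("Hello, Azure Storage!");   // new upload convenience
String text = blob.downloadText();          // read the blob back as a String
blob.downloadToFile("C:/temp/sample.txt");  // persist the blob to a local file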
Logging
The 0.5.0 release supports logging via the SLF4J logging facade. This allows users to utilize various logging frameworks in conjunction with the storage api in order to log information regarding request execution (See below for a table of what information is logged). We plan on providing additional rich logging information in subsequent releases to further assist clients in debugging their applications.
Logged Data
Each log line will include the following data:
- Client Request ID: Per request ID that is specified by the user in OperationContext
- Event: Free-form text
- Any other information, as determined by the user-chosen underlying logging framework
For example, if the underlying logging framework chosen is Simple, the log line might look something like the following:
[main] INFO ROOT – {c88f2fe5-7150-467b-8571-7aa40a9a3e27}: {Starting operation.}
Trace Levels
*Please take care when enabling logging while using SAS as the SAS tokens themselves will be logged. As such, clients using SAS with logging enabled should take care to protect the logging output of their application.
Enabling Logging
Logging in the client library follows an opt-in model: no log output is produced unless it is explicitly turned on (see step 2 below).
Another key concept is the facade logging model. SLF4J is a facade and does not provide a logging framework. Instead, the user may choose the underlying logging system, whether it be the built in jdk logger or one of the many open-source alternatives (see the SLF4J site for a list of compatible loggers). Once a logger is selected clients can add the corresponding jar which binds the facade to the chosen framework and the application will log using the settings specific to that framework. As a result, if the user has already chosen a logging framework for their application the storage sdk will work with that framework rather than requiring a separate one. The façade model allows users to change the logging framework easily throughout the development process and avoid any framework lock in.
1. Choose a Logging Framework
To enable logging, first choose a logging framework. If you already have one, simply add the corresponding SLF4J binding jar to your classpath or add a dependency to it in Maven. If you do not already have a logging framework, you will need to choose one and add it to your classpath or add a dependency to it in Maven. If it does not natively implement SLF4J you will also need to add the corresponding SLF4J binding jar. SLF4J comes with its own logger implementation called Simple, so we will use that in our example below. Either download the slf4j-simple-1.7.5.jar from SLF4J and add it to your classpath or include the following Maven dependency:
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.5</version>
</dependency>
This will enable logging to the console with a default log level of info. To change the logging settings, follow the directions for Simple.
2. Turn Logging on in the SDK
By default, the Azure Storage Java SDK will not produce any logs, even if it finds a logging framework and corresponding SLF4J binding. This way, the user is not forced to edit any log framework settings if they do not want Azure Storage logs. If logging is turned on, the root logger settings will be used by default. A different logger may be specified on a per request basis.
Example 1: Enable logging on for every request:
OperationContext.setLoggingEnabledByDefault(true);
Example 2: Enable logging for a single request:
OperationContext ctx = new OperationContext();
ctx.setLoggingEnabled(true);
blockBlobRef.upload(srcStream, -1, null /* accessCondition */, null /* requestOptions */, ctx);
Example 3: To use a specific logger for a request:
OperationContext ctx = new OperationContext();
// turn logging on for that operation context
ctx.setLoggingEnabled(true);
// the slf4j logger factory will get the logger with this name and use
// that logger's settings, including where to log and the log level
ctx.setLogger(LoggerFactory.getLogger("MyLogger"));
blockBlobRef.upload(srcStream, -1, null /* accessCondition */, null /* requestOptions */, ctx);
With client side logging used in conjunction with storage service logging clients can now get a complete view of their application from both the client and server perspectives.
Change Log & Breaking Changes
As mentioned above, this release supports the 2013-08-15 REST service version and details of features and changes in the version can be found in MSDN and also blogged here. In addition to the REST changes, this release includes several client side changes and features. Key changes to note for this release are highlighted below. You can view the complete ChangeLog and BreakingChanges log on github.
Common
- Package Restructure
- RetryResult has been replaced by RetryInfo which provides additional functionality
- Event operations (including event firing) that occur during a request are no longer synchronized, (thread safety is now guaranteed by a CopyOnWriteArrayList of the event listeners)
- OperationContext.sendingRequest event is now fired prior to the connection being established, allowing users to alter headers
Blob
- Blob downloadRange now downloads to a Stream. The previous downloadRange has been renamed to downloadRangeToByteArray.
- Removed sparse page blob feature
- CloudBlobContainer.createIfNotExist was renamed to CloudBlobContainer.createIfNotExists
- CloudBlobClient.streamMinimumReadSizeInBytes has been removed. This functionality is now provided by CloudBlob.streamMinimumReadSizeInBytes (settable per-blob, not per-client.)
- CloudBlobClient.pageBlobStreamWriteSizeInBytes and CloudBlobClient.writeBlockSizeInBytes have been removed. This functionality is now provided by CloudBlob.streamWriteSizeInBytes.
Table
- Removed id field (along with getId, setId) from TableResult
- CloudTable.createIfNotExist was renamed to CloudTable.createIfNotExists
- Inserts in operations no longer echo content. Echo content can be re-enabled by using the insert(TableEntity, boolean) method and specifying true. This will cause the resulting HTTP status code on the TableResult for successful inserts to be 204 (no-content) rather than 201 (Created).
- JsonMinimalMetadata is now the default payload format (rather than AtomPub). Payload format can be specified for all table requests by using CloudTableClient.setTablePayloadFormat or for an individual table request by using TableRequestOptions.setTablePayloadFormat.
Queue
- CloudQueue.createIfNotExist was renamed to CloudQueue.createIfNotExists
Summary
We are continuously making improvements to the developer experience for Windows Azure Storage and very much value your feedback in the comments section below, the forums, or GitHub. If you hit any issues, filing them on GitHub will also allow you to track the resolution.
Joe Giardino, Veena Udayabhanu, Emily Gerner, and Adam Sorrin
Resources
Windows Azure Storage Release – Introducing CORS, JSON, Minute Metrics, and More
Windows Azure Tables: Introducing JSON
Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage
"CloudBlob.uploadFromByteArray" is duplicated.
Using this API to download Page Blobs – VHDs (>2GB): downloadToFile.
Download API returns back with a Storage Exception (An incorrect number of bytes was read from the connection. The connection may have been closed). Reading from source input stream and writing back to output stream is not completing properly and before that it exits.
Setting default timeout value to 0 – options.setTimeoutIntervalInMs(0);
Am I missing something anywhere ? Is that related to HttpConnectionUrl settings/timeout ?
This is a known issue in version 0.5.0. Blob downloads are not retried by default and therefore if you hit an exception while downloading a blob due to a network issue, you would get a non-retryable exception. We have fixed this in our latest release (0.6.0). You can grab it from here – mvnrepository.com/…/0.6.0
|
https://blogs.msdn.microsoft.com/windowsazurestorage/2013/12/19/windows-azure-storage-client-library-for-java-v-0-5-0/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi, really very sorry to ask the same questions... I am still not getting a clear idea. Generally I learn new things easily, but I can't understand abstract classes and interfaces. Please explain with a clear coding example... I am worrying like anything.
I've sent you two links for both; why didn't you reply with your questions so we could help you?!
Here you are again
Abstract classes:
Interfaces:
Abstract Classes
Sometimes you may want to declare a class and yet not know how to define all of the methods that belong to that class. For example, you may want to declare a class called Writer and include in it a member method called Write. However, you don't know how to code Write() because it is different for each type of Writer Device. Of course, you plan to handle this by deriving subclasses of Writer, such as Printer, Disk, Network and Console. But what code do you put in the Write() function of Writer class itself?
In Java you can declare the Write() function of Writer as an abstract method. Doing so allows you to declare the method without writing any code for it in that class. However, you can write code for the method in the subclass. If a method is declared abstract, then the class must also be declared as abstract. For Writer and its subclasses, this means they would appear as follows:
abstract class Writer {
    abstract void Write();
}

public class Printer extends Writer {
    public void Write() {
        // Printer code
    }
}

public class Disk extends Writer {
    public void Write() {
        // File I/O
    }
}

public class Network extends Writer {
    public void Write() {
        // Network I/O
    }
}
Note
A method that is private or static cannot also be declared abstract. Because a private method cannot be overridden in a subclass, a private abstract method would not be usable. Similarly, because all static methods are implicitly final, static methods cannot be overridden.
Abstract Method: This is a method that is incomplete; it has only a declaration and no method body. Here is the syntax for an abstract method declaration:
abstract void f();
A class containing abstract methods is called an abstract class. If a class contains one or more abstract methods, the class itself must also be declared abstract, otherwise the compiler reports an error. You can also declare a class as abstract without including any abstract methods. This is useful when you've got a class in which it doesn't make sense to have any abstract methods, and yet you want to prevent any instances of that class.
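As a quick illustration of that last point, here is a small sketch (the class names are made up): an abstract class with no abstract methods compiles fine, but new Config() would be a compile error, so only the subclass can be instantiated.
abstract class Config {
    // No abstract methods here; declaring the class abstract simply
    // prevents anyone from writing 'new Config()'.
    void printBanner() { System.out.println("loading configuration"); }
}

class FileConfig extends Config {
    // Instances of the subclass are allowed: new FileConfig() is fine.
}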
Sample - 1
import java.util.Random;

abstract class Shape {
    abstract void draw();
}

class Circle extends Shape {
    void draw() { System.out.println("Circle.draw()"); }
}

class Square extends Shape {
    void draw() { System.out.println("Square.draw()"); }
}

class Triangle extends Shape {
    void draw() { System.out.println("Triangle.draw()"); }
}

// A "factory" that randomly creates shapes:
class RandomShapeGenerator {
    private Random rand = new Random();
    public Shape next() {
        switch (rand.nextInt(3)) {
        default:
        case 0: return new Circle();
        case 1: return new Square();
        case 2: return new Triangle();
        }
    }
}

public class TestMain {
    public static void main(String[] args) {
        RandomShapeGenerator gen = new RandomShapeGenerator();
        // Each shape is handled through the abstract Shape type.
        for (int i = 0; i < 5; i++) {
            gen.next().draw();
        }
    }
}
Sample - 2
abstract class Actor {
    abstract void act();
}

class HappyActor extends Actor {
    void act() { System.out.println("HappyActor"); }
}

class SadActor extends Actor {
    void act() { System.out.println("SadActor"); }
}

class Stage {
    private Actor actor = new HappyActor();
    void change() { actor = new SadActor(); }
    void performPlay() { actor.act(); }
}

public class TestMain {
    public static void main(String[] args) {
        Stage stage = new Stage();
        stage.performPlay();
        stage.change();
        stage.performPlay();
    }
}
Interfaces
In the previous session we looked at abstract classes; interfaces take the idea of an incomplete type one step further.
Characteristics of Interfaces
A review of methods is in order before continuing. Methods are similar to functions in other languages. A method is a unit of code that is called and returns a value. Methods perform work on the variables and contain executable code. They are always internal to a class and are associated with an object.
The concept of interfaces is one of the main differences in project design between traditional C and Java application development. The C and other procedural programming language systems' development life cycle often begins with the definition of the application function names and their arguments as empty "black boxes." In other words, the programmers know the necessary argument parameters when programming code that calls these functions without knowing how they are implemented in the function.
Thus they can develop code without first fully fleshing out all the functions. In C, this could be done by defining a function prototype for all the functions and then implementing them as resources and schedules permit. How is this accomplished in Java? Through interfaces.
An interface only defines a method's name, return type, and arguments. It does not include executable code or point to a particular method. Think of an interface as a template of structure, not usage.
Interfaces are used to define the structure of a set of methods that will be implemented by classes yet to be designed and coded. In other words, the calling arguments and the return value must conform to the definition in the interface. The compiler checks this conformity. However, the code internal to one method defined by the interface may achieve the intended result in a wildly different way than a method in another class.
The concept of using interfaces is a variation on inheritance used heavily in Java. The chief benefit of interfaces is that many different classes can all implement the same interface. This guarantees that all such classes will implement a few common methods. It also ensures that all the classes will implement these common methods using the same return type, name, and arguments.
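For instance, here is a minimal sketch (the interface and class names are illustrative) of two unrelated classes implementing the same interface, so that callers can use either one interchangeably:
interface Greeter {
    String greet(String name);
}

class FormalGreeter implements Greeter {
    public String greet(String name) { return "Good day, " + name + "."; }
}

class CasualGreeter implements Greeter {
    public String greet(String name) { return "Hey " + name + "!"; }
}

public class GreeterDemo {
    public static void main(String[] args) {
        // Both implementations honor the same contract, so callers can swap them freely.
        Greeter g = new CasualGreeter();
        System.out.println(g.greet("World"));
    }
}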
Declaring an Interface
An interface is declared in much the same manner as a class, but instead uses the interface keyword instead of the class keyword:
interface name { ... body of interface }
The interface body is declared between the curly braces. The body of an interface consists of declarations for variables and methods.
Modifiers
The two access levels available to classes, public and the default (package) access, can also be applied to an interface's declaration. The default is non-public, which means accessible only by members of the same package. Most interfaces are public because interfaces are the only means to share variable and method definitions between different packages.
Here is an example of a public interface declaration:
public interface AnInterface {
    ... // body of interface
}
Variables and methods declared inside an interface also have modifiers associated with them. However, the modifiers are limited to particular combinations for variables and methods.
Modifiers for variables are limited to one specific set: public static final. In other words, variables declared in interfaces can only function as constants. public static final are the default modifiers. It is not necessary to declare the modifiers explicitly, but it makes the code more self-documenting. Trying to assign other modifiers such as protected results in a compile-time error. Here are examples of variable declarations:
public static final int smtpSocket = 25;
public static final float pie = 3.14159f;
public static final String companyName = "A DATAPOST COMPUTER CENTER";
Modifiers for methods are limited to one specific set as well: public abstract, meaning that methods declared inside an interface can only be abstract. These are the default modifiers for methods. It is not necessary to declare them explicitly, but once again, it makes the code easier to read.
Trying to assign other modifiers, such as protected, results in a compile-time error. Here are example interface method declarations:
public abstract boolean isBlack(Color color);
public abstract boolean isBlack(String colorName);
public abstract StringBuffer promptForName(String prompt);
As you can see in this example, overloaded methods can be declared in an interface just as in a class. An entire interface based on these examples follows:
public interface MyInterface {
    public static final int smtpSocket = 25;
    public static final float pie = 3.14159f;
    public static final String companyName = "A DATAPOST COMPUTER CENTER";
    public abstract boolean isBlack(Color color);
    public abstract boolean isBlack(String colorName);
    public abstract StringBuffer promptForName(String prompt);
}
Interfaces also can extend other interfaces, just as classes can extend other classes, using the extends keyword. In the following code, the interface AnInterface declares a variable named theAnswer:
public interface AnInterface {
    public static final int theAnswer = 42;
}

public interface MyInterface extends AnInterface {
    public static final int smtpSocket = 25;
    public static final float pie = 3.14159f;
    public static final String motto = "The quick brown fox";
    public abstract boolean isBlack(Color color);
    public abstract boolean isBlack(String colorName);
    public abstract StringBuffer promptForName(String prompt);
}
The interface MyInterface specifies that it extends AnInterface. This means that any classes that use MyInterface will have access to not only the variables and methods declared in MyInterface, but also those in AnInterface.
You can also list multiple interfaces after the extends keyword. Multiple, possibly disparate, interfaces can be combined into a logical whole if desired, as in the following:
public interface YourInterface extends HerInterface, HisInterface {
    // body of interface
}

Using an Interface

class name [extends classname] implements interfacename [, interfacename] {
    ... body of class
}
The implements keyword follows the name of the class (or extends declaration) and in turn is followed by one or more comma-separated interface names. The capability to specify a list of interfaces enables a class to have access to a variety of predefined interfaces.
Sample-1

class Note {
    private String noteName;
    private Note(String noteName) { this.noteName = noteName; }
    public String toString() { return noteName; }
    public static final Note MIDDLE_C = new Note("Middle C"),
                             C_SHARP = new Note("C Sharp"),
                             B_FLAT = new Note("B Flat");
}

public class TestMain {
    public static void main(String[] args) {
        // The constants behave like a fixed set of named values.
        System.out.println(Note.MIDDLE_C);
        System.out.println(Note.B_FLAT);
    }
}

Sample-2

interface CanFight { void fight(); }
interface CanSwim { void swim(); }
interface CanFly { void fly(); }

class ActionCharacter {
    public void fight() {}
}

class Hero extends ActionCharacter implements CanFight, CanSwim, CanFly {
    public void swim() {}
    public void fly() {}
}

public class TestMain {
    static void t(CanFight x) { x.fight(); }
    static void u(CanSwim x) { x.swim(); }
    static void v(CanFly x) { x.fly(); }
    public static void main(String[] args) {
        Hero h = new Hero();
        t(h); // treat it as a CanFight
        u(h); // treat it as a CanSwim
        v(h); // treat it as a CanFly
    }
}

Sample-3

Extending an interface with inheritance - You can easily add new method declarations to an interface using inheritance, and you can also combine several interfaces into a new interface with inheritance:

interface Monster { void menace(); }
interface DangerousMonster extends Monster { void destroy(); }
interface Lethal { void kill(); }

class DragonZilla implements DangerousMonster {
    public void menace() {}
    public void destroy() {}
}

interface Vampire extends DangerousMonster, Lethal { void drinkBlood(); }

class VeryBadVampire implements Vampire {
    public void menace() {}
    public void destroy() {}
    public void kill() {}
    public void drinkBlood() {}
}

public class TestMain {
    static void u(Monster b) { b.menace(); }
    static void v(DangerousMonster d) { d.menace(); }
    static void w(Lethal l) { l.kill(); }
    public static void main(String[] args) {
        DangerousMonster barney = new DragonZilla();
        u(barney);
        v(barney);
        Vampire vlad = new VeryBadVampire();
        u(vlad);
        v(vlad);
        w(vlad);
    }
}
|
https://www.daniweb.com/programming/software-development/threads/191813/abstract-and-interfaces
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
The following form allows you to view linux man pages.
#include <execinfo.h>
int backtrace(void **buffer, int size);
char **backtrace_symbols(void *const *buffer, int size);
void backtrace_symbols_fd(void *const *buffer, int size, int fd);
backtrace() stores a backtrace for the calling program in the array pointed to by buffer and returns the number of addresses stored, which is at most size. backtrace_symbols() translates those addresses into an array of strings describing the corresponding functions; the array is malloc(3)ed by the call and must be freed by the caller. On error, NULL is returned. backtrace_symbols_fd() writes the same strings to the file descriptor fd instead of returning them.
backtrace(), backtrace_symbols(), and backtrace_symbols_fd() are provided in glibc since version 2.1.
The full man page includes an example program demonstrating the use of backtrace() and backtrace_symbols().
gcc(1), ld(1), dlopen(3), malloc(3)
webmaster@linuxguruz.com
|
http://www.linuxguruz.com/man-pages/backtrace/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hello. Sorry for posting this multiple times, but I have not found a solution to fix this other than manually editing the database.
When I re-image a workstation and rescan it using Spiceworks (incremental disabled and push all data, not deltas true), the server name changes to the new name, but the device name remains the same. It is annoying because I do this multiple times a week, and have to manually change it in the db.
Please help! I love Spiceworks!
33 Replies
May 28, 2010 at 3:35 UTC
akp982 is an IT service provider.
May 28, 2010 at 3:36 UTC
Hi Gareth,
Thanks for creating a new thread for this issue.
I'm not clear on the difference in the two names you are referring to.
If a device with the same serial number has a Windows Computer Name change, you can perform a Rescan in Spiceworks and the new name should automatically update the device with the respective serial number.
Which names are and are not being updated in Spiceworks? Perhaps posting up a screenshot would be helpful.
May 28, 2010 at 3:42 UTC
akp982 is an IT service provider.
walks back into the dark
May 28, 2010 at 3:46 UTC
Sure, here is a screenshot. As I've said, I have done many rescans with incremental scanning disabled, but this problem persists on many computers after pushing a new ghost image to them.
In the screenshot you can see that the server name of the computer has changed correctly, but the name field is old, and this is the name displayed in spiceworks.
May 28, 2010 at 3:53 UTC
Andy, I have tried the steps suggested in that post except for disabling collect events. However, I don't feel like this is really a scan issue because the scans are discovering new software changes for the device in question.
May 28, 2010 at 3:58 UTC
Try performing a Rescan from the Inventory of an individual device with this problem. The Rescan is always more in depth than scheduled scans are (regardless of settings).
Just find the device in the Inventory and click the Rescan link/button. If nothing changes use nslookup to confirm the device's forward/reverse lookups are correct.
- nslookup arbacknurse2182
- nslookup garethf
- nslookup <ip of arbacknurse2182>
- nslookup <ip of garethf, if it still exists>
Is everything else in Spiceworks updating correctly to correspond with the new image on the device (i.e. software, disk usage, etc.)?
May 28, 2010 at 4:11 UTC
I just did a rescan on the individual device, but the name is still old. Just to clarify, arbacknurse2182 was the name of the computer before I re-imaged it. It is now called garethf. However, it is still named arbacknurse2182 in Spiceworks.
All other information is correct (diskusage, last logon, last reboot, software, etc.)
All dns records seem to be correct:
- nslookup arbacknurse2182 - can't find
- nslookup garethf - 172.20.1.69
- nslookup <ip of arbacknurse2182>
- nslookup <ip of garethf, if it still exists> -garethf
Thanks for looking at this with me.
May 28, 2010 at 5:17 UTC
If you delete the device from the Inventory and rescan the network does the device get picked up with the correct name?
May 28, 2010 at 5:34 UTC
Yes, it does. I just wanted to avoid doing this if possible.
Jun 2, 2010 at 3:26 UTC
This approach probably makes the most sense, don't you think?
The device itself has had its OS reinstalled, so other than the hardware it is no longer the same device. Deleting it and allowing it to be rescanned in to the Inventory allows the new device's information to come in clean, and also wipes out the old history from the previous OS installation (like disk usage, etc.).
This would also clean up any related links between the "old" device and Help Desk tickets.
Jun 3, 2010 at 3:53 UTC
this has happened to me finally as it has been fine up until now! 1 computer shows same issue as Gareth States..
It is not a good idea for us to delete the device as we have a need to keep the Ticket History attached to the device regardless of OS reinstall or re-image!
I have tried all the aforementioned steps also and to no avail! again this is the 1st time it has happened as all other times it has updated successfully both the server name and device name! Could this be due to the new SW versions as I am on the newest!??
Jun 3, 2010 at 4:17 UTC
I agree with Kevin, I would rather not have to delete a computer each time i re-image it, but it does solve the problem for now. I would rather keep the notes in there, etc.
Jun 3, 2010 at 4:30 UTC
Gareth - I manually updated the Device Name of the laptop I had this issue with -- just hit Edit and updated the "Name" Field - thus I don't have to delete the device at all..
oddly enough it seemed to forget one of the group exclusions I made for the previous NAMED PC (although other exclusions were still there) to where I just had to re-exclude it from this group that it technically is a part of, I just put it in an unused group while re-imaging for quicker finding of PC's I am rebuilding..
other than that- the manual name change took just fine and kept all old data!
hope this gets solved, I will continue watching these types of threads and updating with my findings as we often re-image pc's
Jun 3, 2010 at 5:50 UTC
Yea I thought about manually updating the name, but I just like everything to be automatic and easy :)
I agree, hopefully this does get solved, because I feel that it is a bug. Thanks for all your help guys.
Jun 4, 2010 at 3:09 UTC
If I can get some more details from you guys we might be able to add something like this in the future. If there is an easy way to identify when a device has been "imaged" or the OS rebuilt, it might not be too complicated to deal with this type of situation.
So both of you would like to keep any historical information (i.e. related entries in the Help Desk that point to the device)?
For example, hardwareA is running with name oldOS. You decide its time to image hardwareA, so you do so and rename the device newOS.
Now in Spiceworks you want this to happen:
- On a future scan Spiceworks identifies hardwareA (by its constant serial number) and notices some OS identifier (see below WMIC) has changed.
- You receive some kind of UI notification "hardwareA has been detected as newOS, and was previously seen as oldOS. Did you rebuild the OS?"
- You say "Yes, I did rebuild the OS." quietly to yourself.
- You now click "Yes" on the onscreen notification.
- Spiceworks proceeds to create an entirely new device for hardwareA, using the new name newOS, whilst keeping oldOS and marking it as "historical" somehow.
Moving forward oldOS is never updated/changed again, but you are able to delete it at any time. newOS permanently becomes the "device" of choice whenever Spiceworks scans hardwareA.
Thoughts?
Could you test this WMIC on a piece of hardware, both before and after "imaging" or rebuilding its OS?
wmic path win32_operatingsystem get installdate
Jun 4, 2010 at 3:27 UTC
Ben.B -this sounds exactly like something that could resolve this issue..
BTW- I happened to image a desktop (DELL) since my last post/issue and it worked fine just as before whereas it updated the Server and Device Name in SW and still everything is tied to the new device name properly!
[The difference this time around was that this desktop (updated perfectly) was imaged through WDS using a premade image for that model..
the laptop previously that didn't update the Device name was imaged from a Dell WinXP Pro SP3 disk and manually installed everything from there..]
However as stated before neither ways of imaging caused this for me before..
Also, I thought that not having the old device anymore would be fine but as you describe having the device there and being referenced in that device that it no longer is existing and has since been reinstalled as "newOS" would be nice for seeing what PC's had installed(software history) and how were configured before re-install.. thus we can compare a before and after install (so devices would need to continue to be linked til "oldOS" got deleted!!
WOW - I am excited for these features already- thanks for the suggestions and for looking into this!
--this weekend I will
1. reimage the laptop that didn't auto-update Device name and rename it to see if it doesn't update a second time!
2. I will then attempt to create a .WIM for it in order to reimage next time through with premade install.. and see if it updates name properly that way!
3. also, each time I reimage before and after I will test the WMIC you listed and track my results..
I will hopefully have these done and post findings by next week.
Jun 8, 2010 at 5:58 UTC
Again, thanks for helping me with this. Ok, this is very strange. I just tried renaming two computers in Windows and rescanning them manually in Spiceworks. Once again, the server name is changing but not the device name (see attachment).
The server name changed to arxpspare1328, which is what I renamed the computer to, but the device is still listed under its old name in Spiceworks. This has nothing to do with a new OS, I just renamed the computer in windows, which we do from time to time.
So how is the 'name' field in the 'devices' table in the db updated? It seems to me that 'server_name' is simply the DNS name, correct?
Thanks,
Gareth
Jun 8, 2010 at 6:39 UTC
hmmm.. good test Gareth - I haven't gotten to my aforementioned testing yet (which Ben.B's suggested update possibilities still look promising though), I have been pretty swamped with DBase restructuring,
but anyways that is a very good point, now we know it is no longer the OS rebuild causing that lack of update to the Device name.
As stated in last post my SW install successfully identified the new name on the next laptop I re-imaged since the issue happened - I will now rename the device and rescan the old device from SW and see if it updates properly as it seems to have reoccurred for you by just simply renaming the PC!
Jun 9, 2010 at 1:58 UTC
Update - this seems to be happening more often for us now..
Server name changing -- now I am being forced to manually change the device name on at least 30 Thin Clients in SW..
(still working on re-imaging tasks but seems this is taking too much effort to update names manually)
Jun 16, 2010 at 9:15 UTC
Let me test this out for you guys - it sounds like we also have a problem updating device names when the Windows name changes.
Maybe this has really been the issue all along?
Jun 16, 2010 at 1:45 UTC
This tested out ok (i.e. both "name" and "servername" updated properly when only the Windows name changed on the device).
However, Spiceworks had quite a few things at its disposal that you guys may or may not have available to Spiceworks on your networks:
- DNS forward record updated name properly
- NetBIOS returns updated name properly
- LDAP query returns AD listing, including updated name
In my case the reverse lookup still had the old name, but this didn't prevent Spiceworks from properly updating the name.
It looks like both of you are having this same problem.
- Gareth, in your case are these true workstations or thin clients like Burn? Based on your description it sounds like they're standard XP devices.
Try testing again, and confirm that the forward DNS record, NetBIOS and LDAP are updated properly.
- Are these devices on your domains?
- Check the DNS forward/reverse lookup zones for the updated records that reflect the device's new name
- Use something to check the NetBIOS name
Lastly, run this wmic to check LDAP:
wmic /namespace:\\root\directory\ldap path ds_computer get ds_cn,ds_DNSHostName,ds_OperatingSystem,ds_OperatingSystemServicePack,ds_OperatingSystemVersion,ds_distinguishedName
Jun 16, 2010 at 3:51 UTC
Thanks for the constant support Ben! I took a while to get to this but here are the tests I have run so far per your suggestion
it is all in the attached photo..
(EDIT) for quick reference so you don't have to open photo..
Imaged laptop with Dell SP3 WinXP disk and installed all drivers…resulting LDAP and install date checks are next
IT-LOANERLT2 CN=IT-LOANERLT2, CN=Computers, DC=UHC, DC=local
it-loanerlt2.UHC.local Windows XP Professional Service Pack 3 5.1 (2600)
--------------------------------Install Date: 20100601160657.000000-420
After name change the above WMIC install date check was the same result…
Here are the results for the LDAP check after the name change
IT-LOANERLT CN=IT-LOANERLT, CN=Computers, DC=UHC, DC=local
it-loanerlt.UHC.local Windows XP Professional Service Pack 3 5.1 (2600)
--------------------NSLOOKUPs
after name change both Forward lookup and Reverse lookup are showing the right info of IT-LoanerLT and 192.168.100.90
---------------------NBLOOKUPs
after name change-Successful NetBios lookup
IT-LoanerLT resolved to 192.168.100.90
Default Server 192.168.100.90
-------------------
After all that worked – the Servername updated in SW but the Device name again stayed the same! (all I did was device name change while still on the Domain!)
Server Name changed on IT-LoanerLT2
Server Name was it-loanerlt2.uhc.local, now it-loanerlt.uhc.local
AHHHHH - still not working!!
NOTE: I accepted that this issue occurred for some thinclients as they aren't full PC's but since for a WinXP SP3 Dell laptop it is reoccurring than something is not working as it should!
I am willing to reimage the laptop and try again from scratch - but since the name didn't update this way I am not sure it matters much that I continue, since the issue is apparent! I will however do so and check all the same info after a full reimage!
thanks again
Jun 17, 2010 at 3:07 UTC
Took a look at this with one of the developers (I was able to reproduce the problem last night).
You should be able to correct this issue by finding the incorrectly named/old name of the device in the Spiceworks Inventory and delete the Name from the Name field, then Save.
I will test to confirm that works.
If you have loads of devices with this problem we can take care of this for you by editing all of the affected entries in the database at once.
Thanks for all of your help, guys!
Jun 17, 2010 at 3:16 UTC
that's fine and all but that requires user intervention, defeats the whole purpose of updating info by scanning! - the problem is that it's a 50/50 thing.. I am quite capable of editing the DB and would definitely appreciate SW's help if it became too abundant in how many machine it occurred on, but again it is about half of devices doing this, PC's/laptops/thin clients and shouldn't require manual changing
deleting the name is just the same as manually changing it to what it should be which is what I have been doing here and there for 30 items a week or so...
thanks anyway.. hope this gets fixed in later builds as it is quite a pain!
|
https://community.spiceworks.com/topic/100257-device-name-not-changing
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
true - automatically return a true value when a file is required
package Contemporary::Perl;

use strict;
use warnings;
use true;

sub import {
    strict->import();
    warnings->import();
    true->import();
}
true is file-scoped rather than lexically-scoped. Importing it anywhere in a file (e.g. at the top-level or in a nested scope) causes that file to return true, and unimporting it anywhere in a file restores the default behaviour. Redundant imports/unimports are ignored.
Enable the "automatically return true" behaviour for the currently-compiling file. This should typically be invoked from the
import method of a module that loads
true. Code that uses this module solely on behalf of its callers can load
true without importing it e.g.
use true (); # don't import sub import { true->import(); } 1;
But there's nothing stopping a wrapper module also importing true to obviate its own need to explicitly return a true value:

use true; # both load and import it

sub import {
    true->import();
}

# no need to return true
Disable the "automatically return true" behaviour for the currently-compiling file.
None by default.
Because some versions of YAML::XS may interpret the key of true as a boolean, you may have trouble declaring a dependency on true.pm. You can work around this by declaring a dependency on the package true::VERSION, which has the same version as true.pm.
chocolateboy, <chocolate@cpan.org>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.10.0 or, at your option, any later version of Perl 5 you may have available.
|
http://search.cpan.org/~chocolate/true-0.18/lib/true.pm
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Red Hat Bugzilla – Bug 211766
Fails to build on x86_64
Last modified: 2007-11-30 17:11:46 EST
python-reportlab x86_64 build is an older version than the i386 build. Looks like this is from pre-buildsystem time.
Failed due to sitelib<->sitearch, so I synced with FC-4:
Build servers cause trouble currently, so it may be necessary
to requeue this one.
[re-using this bug with changed Summary...]
In short:
Rebuilding the src.rpm on an x86_64 machine fails.
[...]
So, hammer3 hangs, but hammer2 (x86_64 machine) failed to rebuild the
package:
It installed some files into python_sitearch, not python_sitelib, and
then failed to include the installed /usr/lib64/python... files.
I've hacked the FC-3 spec to use a link from /usr/lib64 to /usr/lib
during %install, but that is sub-optimal. Job #20093 succeeded. noarch
package will be in next push.
Keeping this bug open, so you are aware that rebuild failures can
happen when an x86_64 build server is used.
I assume that FC-4 and newer are affected, too.
Thanks for the heads up.
patching setup.py to use this:
from distutils.sysconfig import get_python_lib
package_path = pjoin(get_python_lib(), 'reportlab')
instead of:
package_path = pjoin(package_home(distutils.__dict__), 'site-packages',
'reportlab')
seems to work for me under RHEL4 which is more or less the same as FC3
FC3 and FC4 have now been EOL'd.
Please check the ticket against a current Fedora release, and either adjust the
release number, or close it if appropriate.
Thanks.
Your friendly BZ janitor :-)
|
https://bugzilla.redhat.com/show_bug.cgi?id=211766
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
my code is giving me this error and I can't for the life of me figure out why it's telling me "NameError: name 'Button' is not defined." In Tkinter I thought Button was supposed to add a button?
import Tkinter
gameConsole = Tkinter.Tk()
#code to add widgets will go below
#creates the "number 1" Button
b1 = Button(win,text="One")
gameConsole.wm_title("Console")
gameConsole.mainloop()
A few options available to source the namespace:
from Tkinter import Button: import the specific class.
import Tkinter and then b1 = Tkinter.Button(win, text="One"): qualify the name with the module namespace inline.
from Tkinter import *: import everything from the module.
|
https://codedump.io/share/OkfjafHtUNIj/1/why-is-my-code-returning-quotname-39button39-is-not-definedquot
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
As the title suggests, I'm working on a site written in python and it makes several calls to the urllib2 module to read websites. I then parse them with BeautifulSoup.
As I have to read 5-10 sites, the page takes a while to load.
I'm just wondering if there's a way to read the sites all at once? Or anytricks to make it faster, like should I close the urllib2.urlopen after each read, or keep it open?
Added: also, if I were to just switch over to PHP, would that be faster for fetching and parsing HTML and XML files from other sites? I just want it to load faster, as opposed to the ~20 seconds it currently takes.
I'm rewriting Dumb Guy's code below using modern Python modules like threading and Queue.

import threading, urllib2
import Queue

urls_to_load = [
    '',
    '',
    '',
    '',
]

def read_url(url, queue):
    data = urllib2.urlopen(url).read()
    print('Fetched %s from %s' % (len(data), url))
    queue.put(data)

def fetch_parallel():
    result = Queue.Queue()
    threads = [threading.Thread(target=read_url, args=(url, result)) for url in urls_to_load]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

def fetch_sequencial():
    result = Queue.Queue()
    for url in urls_to_load:
        read_url(url, result)
    return result
Best time for fetch_sequencial() is 2s. Best time for fetch_parallel() is 0.9s.

Also it is incorrect to say threads are useless in Python because of the GIL. This is one of those cases where threads are useful in Python, because the threads are blocked on I/O. As you can see from my results, the parallel case is 2 times faster.
|
https://codedump.io/share/nERouQOgpm9r/1/python-urllib2urlopen-is-slow-need-a-better-way-to-read-several-urls
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Exception in JSP: /resources/pages/fileNotFound.jsp:10
7: <!-- Btw this declares html as the default namespace -->
8:
9: <jsp:directive.page
10: <f:subview
11: <html>
12:
13: <head><meta name="WickedWare" content="Mnemonica Application - Due Date Calendar System"/>
Stacktrace:
|
https://samebug.io/exceptions/2505936/org.apache.jasper.JasperException/exception-in-jsp-resourcespagesfilenotfoundjsp10-7--?soft=false
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Agenda:
- What is up
- Param Proposal
- GeoTools 2.5.0 timings
- mvn gdal dependency
Action items:
- Param Proposal accepted
- We need to plan out GeoTools 2.5.0 release candidate this month (after talking to Eclesia)
- Simboss will remove the ascii grid dependency from rendering testing
jgarnett: 0) what is up
aaime|dinner: put
aaime|dinner: (sorry,was trying to run putty on my desktop)
jgarnett: jgarnett - getting ready for several udig training courses; helping out everyone and accomplishing nothing
***groldan param proposal, arcsde dispose, arcsde versioning tunning
jgarnett: aaime_dinner: get
jgarnett: (ftp like IRC but slower)
aaime|dinner is now known as aaime
***aaime random bug fixing waiting for the next paid work shower to hit
simboss left the room (quit: Remote closed the connection).
groldan: nex topic?
aaime: yo
jgarnett: 1) param proposal
jgarnett: groldan / gdavis ...
groldan: hi
groldan:
groldan: that's the proposal page
groldan: to extend the datastore parameters to hold on more metadata
groldan: like if its a password parameter, etc
jgarnett: review: proposal looks great - thanks for putting up with both me and the geotools process (I feel like both have slowed you down groldan). We need to identify the doc pages to change (gdavis you have some already?) and create a Jira issue ...
jgarnett: but mostly I want both of you to be happy and working together.
aaime: I've quickly scanned thru it and I have a question for each alternative
groldan: shoot
aaime: (can I go?)
aaime: For the first proposal, if isPassword is so important that it warrants a convinience accesso method... why not make it a field?
aaime: (and besides, that method is very NPE prone)
jgarnett: good point; why do we need an access method?
groldan: (yeah, do not take into account the NPE'ness of it
groldan: it'll be coded just fine)
groldan: I'm not sure we really need an access method to be honest
jgarnett: also you probably want to have a hint if the parameter is a credential (and should not be stored in plain text); ie do we need to account for user name?
aaime: For the second proposal, trying to apply Parameter<T> everywhere may make that class more rigid and more brittle
jgarnett: let's remove the method then; less code is easier.
aaime: brittle: over time since it'll try to be more things to more client usages, and
mauricio left the room (quit: Remote closed the connection).
aaime: rigid: once you allow a certain kind of usage you'll be stuck with it
groldan: it'll be just a little more code to clients: ((Boolean)getParameterMetadata(Param.IS_PASSWORD)).booleanValue()
jgarnett: gdavis what do you think? Is the <T> helping at all? I cannot really see how it can help myself ... I suspect a lot of times people will just have a List<Parameter<?>> ...
aaime: groldan, null checks
groldan: that's a reason for the utility method so
aaime: you'll need a if in all calling points
aaime: any way to embed that specific extraction code inside the param key itself?
groldan: yet, my first idea was to just make it a boolean field
groldan: but using the map just alings in further moving to the common Parameter class
groldan: right?
gdavis_: we use it for Map<Parameter<type>>
aaime: a map of parameters all of the same type? usefulness of it?
jgarnett: um anything wrong with doing a test for Boolean.TRUE.equals( param.metadata.get( IS_PASSWORD) ) ?
jgarnett: aaime++
aaime: jgarnett, good one
jgarnett: gdavis where do you have that Map<Parameter<type>>; aaime may of spotted a bug for us...
gdavis_: ProcessFactory
gdavis_: public Map<String,Parameter<?>> getParameterInfo();
jgarnett: aaime I would be happy with public final Class<?> type; but it does not bother me too much if people want to document their type up front.... what do you think?
aaime: beware of generics, they might bite
jgarnett: I find that Map<String,Parameter<?>> documents pretty well what is happening.
aaime: ()
gdavis_: yah, I dont see any problem with it...
jgarnett: warning on generics is good; but we are stuck with them somehwere - ie public final Class<?> type vs Parameter<?> ..
aaime: sure there is no problem as long as you use them in simple ways
jgarnett: but yeah; we can lose the Parameter<T> if you want.
jgarnett: okay so can we vote on this now? groldan has been waiting weeks...
jgarnett: aaime do you want me to remove the <T> first?
jgarnett: gdavis and groldan you are both happy? that is what my vote depends on ... getting a jira and listing the documentation pages should be done; but I can trust you guys will do that.
aaime: Hmm... tell you the truth, I don't know
groldan: I am
aaime: my readings around the net seem to suggest generics usage is good if you use them sparingly
aaime: and can turn into a nightmare otherwise
aaime: but I really don't have enough expertise on it to have an opinion of my own
gdavis_: sounds good to me
aaime: let's try with a simple test. Is adding the parameter going to spot some bugs for us or not?
jgarnett: it has made our demo code easier to follow
jgarnett: but does not really help anyone in the "dynamic" case ...
jgarnett: so I like the generic from the perspective of writing user docs
jgarnett: but I understand it does not provide any information that is not already there...
aaime: oh well, I raised my concern, if you don't see any problem with it, just keep the param
jgarnett: +1
groldan: I'm obviously a community +1
simboss n=chatzill@89-97-166-106.ip18.fastwebnet.it entered the room.
groldan: anyone else?
gdavis_: +1 from me
gdavis_: if my vote counts
groldan: it counts as community support, just like mine
groldan: but we need two more psc to decide
aaime: +1
groldan: okay it seems we don't have more pmc around
groldan: simboss are you?
jgarnett: simboss ping? we need you
jgarnett: we may have to drop ianturton; been too long since he has voted.
groldan: I'll update the page with your votes and ask on the ml for more otherwise
groldan: as to keep the meeting going on
jgarnett: thanks
jgarnett: 2) GeoTools 2.5.0 timings
groldan: okay, lets do that, thanks all for your feedback and support
jgarnett: aaime the floor is yours...
aaime: Just trying to look around and see how things are looking for a 2.5.0-rc1 release
aaime: since we'd need to cut 1.7.0-rc1
jgarnett: well I am in a panic
aaime: and I would like to do so before Eclesia hits gt2 with the SE changes (which seem to be massive)
jgarnett: I have a list of technical debt piling up on all sides...
aaime: Hmm... anything new we did not have when we released 2.4.0^
jgarnett: .. I am less woried about Eclesia's changes; but I agree we had one goal for 2.5 - and that was feature model. it is time to cut a release.
simboss: jgarnett
aaime: jgarnett, I am because when it hits, it may destabilize for some time rendering
simboss: fighting with the internet connection
aaime: GeoServer cannot enter RC with that change on the radar
jgarnett: aaime - the feature collection side seems to have gotten more complicated - even since 2.4.0. I wish jdeolive would go ahead with his proposal to clean up feature collection.
groldan: buf, that'd be awesome
aaime: I did not notice any worsening (if worsening was possible at all)
jgarnett: agreed; aaime there is no way we can accept a massive change right now; eclesia would have to wait on a branch until we get to RC status
aaime: yep, but afaik he need to complete that work within one month
jgarnett: I see; well since he is not at a meeting; and is having trouble emailing it is very kind of you to speak for him
aaime: I'd prefer to find a solution that suits all our needs instead of starting a fight
jgarnett: (and helpful for planning)
jgarnett: yep
jgarnett: okay; so we should do two things:
jgarnett: a) set a target at one month
jgarnett: b) check with eclesia about that target; is it good enough for him ...
jgarnett: is a month a good timeline for a RC as far as geoserver is concerned?
groldan: hmmm... two weeks after the code sprint...
aaime: officially, we had to cut 1.7.0-rc1 by the end of this week... unfortunately I'm not able to discuss this with the gs pmc
groldan: or so
aaime: since the gs-devel ml is not receiving mails...
jgarnett: It would be nice from a udig standpoint; I mostly want to test raster symbolizer support and fix up a few feature type building issues - but udig is very happy to be on trunk again.
aaime: jgarnett, the new UI will be on the 2.0.x series
aaime: that's why we'd need to cut the rc before the sprint
jgarnett: aaime; I understand. and to start on that you kind of need 1.7 to fork off prior?
jgarnett: can geoserver 1.7.x and geoserver trunk both track geotools trunk for a little bit?
aaime: Uh?
simboss: guys quick stupid question, where is the proposal page for eclesia's work?
jgarnett: ie is the deadline 1 month for eclesia; and 1 week for geoserver?
jgarnett: he just wrote it recently
ggesquiere n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net entered the room.
aaime: jgannett, yes, it could be possible, thought I'm scared by the possibilities of someone breaking solid some code
jgarnett: it is not complete yet; I think we have all been asking him ...
jgarnett:
aaime: I mean, when you cut a "stable" release you make a clear statement that no major experiments need to go on
simboss: so let me recap, we know that there 1 month deadline, but we don't even have a proposal page for the work? nice!
jgarnett: note I am much happier with that page; then the work that has been happening on geoapi. I think this page represents progress and communication.
aaime: guys, I believe he has a 1 month deadling, I'm not certain
jgarnett: okay; thanks aaime ... and simboss.
jgarnett: I am more worried about geoserver plans right now; since we have aaime here to speak for them.
aaime: jgarnett, yeah, I don't have a solid story here
aaime: because we still lack at least one proposal and one relatively big change in gt2
aaime: the wicket ui proposal for gs
aaime: and the feature speedup code in gt
jgarnett: lets just treat eclesia's work as starting out; and he will talk to us about planning once his proposal page is ready ... a proposal being ready gives us a two week window for planning. And it is not the worst thing in the world if he wants to get organized before talking to us...
jgarnett: okay so we have a solid bit of gt work? ie feature speedup code in gt?
aaime: we yes, we do not want to release gs 1.7.0 much slower than 1.6.x, you know
aaime: but I don't know how far jdeolive is with it
jgarnett: so it sounds like we cannot make a decision today; but we can talk informally ... it is good to know what geoserver needs as part of a 2.5.RC
jgarnett: I talked to jdeolive and he implemented a "fast" feature.
aaime: yeah, that's why I wanted to talk about it, to gather some impressions, opinions
jgarnett: but most of the time was spent in some very slow / correct validation code in SimpleFeatureTypeBuilder; we are going to need to turn that puppy off.
jgarnett: We only put that validation check in there as a sanity check during the transition; it is now time to take off the training wheels.
aaime: anyways, that would be it for me
jgarnett: So I am not sure I expect change proposals for this work; just responsible development.
aaime: I don't see any major negative reaction and I'm releaved... now if only I could organise a talk in gs land...
jgarnett: ie; the api is holding up; we have testing and performance to do.
aaime: jgarnett, correct, no api change, no proposal need
jgarnett: okay; for our planning we will expect a branch / RC in a matter of weeks (depending on how we can serve you and eclesia)
jgarnett: cool. if you are happy we can move on.
aaime: yep
jgarnett: 3) mvn gdal dependency
jgarnett: okay this one is me...
jgarnett: thanks to everyone for taking part in a difficult email thread.
jgarnett: I wanted to ask if the technical issue (ie a dependency in renderer that shoudl not be there) has been resolved?
simboss: to be honest the main technical issue here
simboss: was the lack of knowledge of the topic from the people who started the mailing thread
jgarnett: (going to run mvn dependency:tree in rendering and see for myself)
simboss: since there was/is no gdal dependency
jgarnett: okay so this is a education / communication problem rather than a technical one?
simboss: the only problem, if we really want to find one
simboss: is that the rastersymbolizer work
simboss: uses in its tests
simboss: I repeat, in its tests,
simboss: the asciigrid imagereader from imageio-ext
simboss: to read a small esri ascii grid
aaime: we have two problems afaik, one is that the dependency to arcgrid is not using test scope as it should, the other is that the same dep is a snapshot
simboss: we are going to take two actions
simboss: (yep)
simboss: 1> give the dependency test scope
simboss: daniele already did this
simboss: 2> next week I will spend a few hours on converting the asci grid to a rw file
simboss: so that we don't need to depdend on anything
jgarnett: got it
simboss: but again, I would ask people to check thing before generating confusion
simboss: since this is not the first time that I see
simboss: and I found it
simboss: at least strange
simboss: this minor issue with asciigrid
simboss: has nothing to do with gdal
simboss: or any native dependencies
jgarnett: okay thank you both for updating me on this.
jgarnett: I am sure that I have lept to conclusions myself; so I am not going to be too grumpy.
jgarnett: thanks for taking the time to resolve this; and to eclesia for noticing the problem.
jgarnett: that should be it for the meeting?
aaime: yep
jgarnett: thanks muchly; I will post.
jgarnett: and we ended on time.
groldan: cool, bye all
jgarnett: simboss we missed your vote on the param proposal above.
simboss: sorry about that
jgarnett: if you could look at the page and bug groldan if you have questions? we need your vote to continue.
groldan: simboss
simboss: do you have a quick link
simboss: ah great
groldan: it'd be kind if you could
aaime: jgarnett, I missed a bit there... which alternative did we choose?
groldan: 1 for 2.4.x
groldan: 2 for trunk
aaime: ah
0) what is up
1) update headers
2) DataStoreFactory.Param for passwords
acuster: btw, jgarnett medii decided to contribute under the umbrella of Geomatys
acuster: he decided signing an agreement was way too much work, the slacker
jgarnett: yeah I saw that on email; thanks.
jgarnett: so agenda is open for another couple minutes; what do people want to talk about today?
acuster: java 7!
***acuster has been watching google tech talks
jgarnett: (I remember some topics from the geoserver meeting being aimed at the geotools code base ...)
jgarnett: but mostly last week was build hell around these parts.
acuster: looks like the File tools are going to be great
acuster: copy/move/symlinks...
jgarnett: .. cannot find links to last weeks IRC meeting for geoserver (sigh)
acuster: shall we get this over with?
jgarnett: yep
jgarnett: 0) what is up
vheurteaux left the room.
***dwins links to last weeks GeoServer meeting log: <dwinslow> Do we need to have GeoServer generate the super-overlays itself, or is it safe to assume we can rely on GWC for creating parent documents that link the proper tiles?
dwins:
acuster: acuster — learning how to code 101, setting up a system for geometries
***dwins learns the difference between shift+insert and middle click
jgarnett: jgarnett - sorting out feature events (I wish Feature actually had a FeatureId rather than a string!), we are down to daily uDig 1.1 releases, and then I have my day job.
vheurteaux n=vheurtea@85.170.87.197 entered the room.
gdavis_: gdavis: still working on WPS module and tweaking the XSD/bing stuff for WPS. With the recent xsd module name fixes I can now work on getting the gt-wps module committed (in unsupported)
jgarnett: Right the idea was to let DataStoreFactory.Param subclass for passwords; groldan did you want to grab a agenda topic for that?
groldan: right
acuster: desruisseaux---waiting for his computer to reach its breakpoint
acuster: the poor man won't leave his aging laptop
groldan has changed the topic to: 0) What's up? 1) update headers 2) DataStoreFactory.Param for passwords
groldan: done
jgarnett: okay lets get this party started....
jgarnett: 1) update headers
jgarnett: I am hoping acuster? Or someone? has good news for us in the form of a script.
acuster: not quite yet
jgarnett: (my hopes are dashed!)
acuster: we are hammering it out, cedric keeps getting more sophisticated and then chases down bugs
acuster: we're trying to keep it more broadly useable
acuster: i.e. in english
acuster: so sometime this week it should chew through metadata and referencing
acuster: I'd like to commit all these scripts /helper classes
acuster: to the svn
acuster: for example in some utility directory in build/ or some such
acuster: any objections to that?
jgarnett: sounds fine; but
acuster: e.g. the svndumpfilter scripts
jgarnett: I don't want to be too distracted getting a perfect script; ie a search and replace in eclipse followed by a manual check would also work.
jgarnett: (ie results matter, and a manual check is still needed right?)
acuster: yes
acuster: we have some more serious constraints on this end
acuster: i.e. Martin wants his name written out just exactly so
acuster: with little flowers all around it
acuster: no, seriously it should be soon
acuster: I'll post it tomorrow and you can hack it in another direction for your needs
acuster: (or at least test what works/doesn't work for you)
acuster: anything else?
desruisseaux: Jody: Search and replace doesn't work.
desruisseaux: The work to be done is a little bit more complex than that.
desruisseaux: But Cédric is almost done.
wolf i=woberghe@hoas-fe35dd00-150.dhcp.inet.fi entered the room.
jgarnett: deruisseaux; you are correct; it only does 70% of the work.
jgarnett: or perhaps less?
acuster: we want to get warnings when weird things are encountered
jgarnett: okay; so in terms of project planning we wanted to get done by now so we could put ourselves up for graduation at the next board meeting right?
acuster: ah, indeed
***acuster goes looking for the script as is
jgarnett: acuster++ yeah I can see how that would be good; it would save some time on the manual check ... but the idea is to do the manual check (updating the headers before the manual check is fine, but the manual check is still needed)
desruisseaux: Jody, it does less than 70% of the work. I wanted the script to take into account the dates in the (C) statements and move the copyright holders as @author when they are not already there.
jgarnett: okay for me I can just stay tuned for email; and we will miss out on graduating this month.
acuster:
jgarnett: I would like to get the core library and plugins sorted; and then start the graduation process.
acuster: when is the meeting?
jgarnett: I will check; or someone can ask on #osgeo channel
jgarnett:
jgarnett: June 6th
acuster: yeah, not for this month
acuster: first day of summer was our fallback deadline
jgarnett: On a related note I am "mentor" for Deegree and they are going to finish in a couple of weeks.
acuster: poor Cameron who wanted us done in 3months
jgarnett: moving on?
jgarnett: 2) DataStoreFactory.Param
jgarnett: groldan this one is yours...
groldan: hi, take this as background
groldan: the thing is, we need a way to identify a datastore param that holds a password
groldan: so the configuration system can store it encrypted, not show it up on ui's as clear text, etc
groldan: and the less friction path seems to be just to add a "password" field to DataStore.Param
jgarnett: This is the same problem we have in the unsupported/process API; ie tell me some more details about this parameter so I can treat it correctly.
groldan: DataStoreFactory.Param, I mean
jgarnett: groldan can you review the approach here:
groldan: the Map?
jgarnett: I would not mind moving this interface into wider use; and making DataStoreFactorySPI.Param extend it.
jgarnett: combo of Map + document keys
groldan: and what about 2.4.x
acuster: yeah, we should never store passwords, at least not until we have a serious security system in place
jgarnett: And what about 2.4.x? Why not add the Map there; you are breaking API so you may as well break API in a way that is method compatible with "the future" ?
groldan: jgarnett: I don't quite see why the same parameter object shall be used for the process and datastore api
jgarnett: because they are both the same design; ie
jgarnett: lets document what the keys mean when the user interface gives us them.
jgarnett: In both cases we need enough information to make "widgets" on screen to communicate with the user...
jgarnett: Note you can do what you need right now just by "handling" the text and parse and toString methods...
groldan: hmmm.. wonder about losing cohesion too
groldan: and would you be doing this on 2.4.x too?
jgarnett: DataStoreFactorySpi.Param.text is human readable? DataStoreFactorySpi.Param.parse( text ) converts it to the internal representation. This is what we did for geoserver to "prevent" the need for extra flags describing what the data was for...
groldan: I don't want to do one thiing on the branch and another on trunk
jgarnett: so you have a choice right?
groldan: so you're suggesting parse(text) and text() to take the responsibility of encrypting and decrypting?
jgarnett: a) use parse, text, toString methods (ie stick with the current plan for geoserver)
jgarnett: b) add a field isPassword?
jgarnett: c) add a Map allowing isPassword, isHidden, isASmallFish etc...
jgarnett: you can review if (a) works for you ...
groldan: I don't think text() and parse() should do so; will we put pluggable cryptography algorithms on it?
jgarnett: if not (given a choice between (b) and (c) ) I ask you to consider (c) in order to match what we are doing for WPS
sfarber left the room.
jgarnett: note you probably want to "encrypt" username and password; so you are not talking a single isPassword flag right?
groldan: hmmm you're increasing the scope of this
groldan: Param already has a lot of fields, most of the time with defaults
groldan: if we go with c)
jgarnett: I am sick of people saying that
jgarnett: I am trying to explore your problem with you ... trying to see a solution that will last us more than a couple minutes ...
groldan: where you put that Param class? un org.geotools.util?
jgarnett: I have had that feedback a couple times now; I don't mind that we are all on deadlines ... but really?
groldan: I understand Jody, I'm wondering if c) is good enough though
jgarnett: groldan you are right; we don't know if (c) is good enough
jgarnett: we did do a bunch of experiments for WPS on this topic
jgarnett: tried out java beans etc...
jgarnett: but to proof is in the pudding and WPS has not shipped yet.
groldan: for example, then we'll have to configure community schema related datastores, which have a lot more configuration than Param can handle
jgarnett: Where is DataAccess.Param now? Can we take it out as a separate class called Parameter?
groldan: I need something that works both for trunk and 2.4.x
jgarnett: good question ...
jgarnett: For community schema one of your parameters is going to be a rather interesting complete object (say DataMappings) I would hope? But I doubt you will be able to reduce it to a single String anymore. Ie parse and text methods would also fail you?
groldan: exactly
groldan: right now there's a single param poiting to a mappings file
groldan: that could well keep being like this
groldan: or people may start needing more flexibility (you tried java beans already, for example)
jgarnett: so how does this solution not work on 2.4.x? Take DataStoreFactory.Param and rather than (b) add a boolean isCredential field choose (c) add a field metadata of type Map
groldan: as said last week on the geoserver meeting, I could either go for an extra boolean field or for a metadata Map
groldan: inside DataAccessFactory.Param
jgarnett: note if you add them as methods; isPassword() and isUser() you can be forward compatible with a Map metadata solution.
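A rough sketch of what option (c) could look like follows; the class name, the IS_PASSWORD key and the isPassword() convenience method are illustrative assumptions for the example, not the API that was eventually agreed:

    // Sketch only: a generic Parameter carrying a metadata map, so flags like
    // "isPassword" can be added later without new fields on every revision.
    import java.util.Collections;
    import java.util.Map;

    public class Parameter {
        /** Hypothetical metadata key marking credential parameters. */
        public static final String IS_PASSWORD = "isPassword";

        public final String key;
        public final Class<?> type;
        public final boolean required;
        public final Map<String, Object> metadata;

        public Parameter(String key, Class<?> type, boolean required,
                         Map<String, Object> metadata) {
            this.key = key;
            this.type = type;
            this.required = required;
            this.metadata = metadata == null
                    ? Collections.<String, Object>emptyMap()
                    : Collections.unmodifiableMap(metadata);
        }

        /** UIs would mask, and config stores encrypt, parameters flagged this way. */
        public boolean isPassword() {
            return Boolean.TRUE.equals(metadata.get(IS_PASSWORD));
        }
    }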
jgarnett: okay so you will write up a proposal on this then Gabriel?
groldan: and that's all the flexibility I need
groldan: does it need a proposal?
jgarnett: and we can talk about this without wasting IRC time...
jgarnett: it is an API change; with integration concerns for WPS work
jgarnett: basically we want to reuse any and all user interface code we make based on parameter / connection handling.
jgarnett: so you are exposed to a lot of requirements; so proposal all the way..
groldan: I'm really hesitant of spending two more weeks on deciding this
groldan: to be honest
groldan: but ok
jgarnett: so decide; and write up a proposal so we can approve it?
jgarnett: it is not like you can just patch this gabriel; it is an API change .
groldan: understood, really
jgarnett: but I agree if this takes two weeks to decide we are doing it wrong; I am sorry you did not have a proposal for today's meeting (I thought you were on it after last weeks geoserver meeting?)
rraffin n=RomainRa@lns-bzn-51f-62-147-196-37.adsl.proxad.net entered the room.
groldan: me too, apologies about that, too much grief with arcsde
rraffin left the room ("$> wall ciao").
jgarnett: yeah; me 2
jgarnett: that is it for agenda topics ... shall we end early?
groldan: ok for me
jgarnett: I will post the logs...
Agenda:
0) what is up
1) commit access for Lucus
2) switch to JSR-275 for Units support
3) hg repo of trunk
4) header cleanup phase I
5) iso 19108 temporal support
jgarnett: meeting time
jgarnett has changed the topic to: 0) what is up
gdavis_: topic: get commit access for lreed who is helping me with the process related work
jgarnett has changed the topic to: 0) what is up 2) commit access
desruisseaux has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275
acuster has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk
jgarnett: acuster++
acuster:
acuster: is up to date
acuster: all the others are in transition
acuster has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk 4) Header cleanup, phase I
acuster: shall we?
desruisseaux: Topic 0) Whats up
desruisseaux: Martin: looks like I'm about to finish org.geotools.image.io.mosaic at last.
gdavis_: gdavis: working away at wps module (still not committed)
jgarnett: jgarnett: balancing a couple udig projects (yeah training more people), helping wps work along; and wonder when SE1.1 will hit geoapi mailing list in earnest
acuster: acuster: getting deeper into geometry, lots of open questions there; hg repo looks like it will work
jgarnett: 1) commit access
jgarnett: gdavis this one is yours...
gdavis_: yes
gdavis_: I'd like to get commit access for lreed
gdavis_: who is helping me with the process/wps work
gdavis_: he would be committing to the wps xsd/beans modules
gdavis_: and the wps module
gdavis_: which shouldn't affect anything else
gdavis_: do I need to send any kind of formal email or anything for this?
acuster: surely jody's magical developer's guide has the answer to that one
aaime: Hi
aaime: I believe we don't have anything about this case in the dev guide
jgarnett: I think cameron wrote the first draft of the developers guide; I am hoping for a magical user guide.
jgarnett: we have the "PMC member approval" to start an unsupported module; perhaps the same deal will work for adding a developer to an unsupported module?
aaime: seems sensible to me
aaime: thought it's not the same
aaime: it's the closest we have
gdavis_: so what do I need to do then?
aaime: zzzzz....
acuster: ask jody for the nod,
gdavis_: ok
acuster: get mr reed? to sign the contributor agreement
acuster: ask the refractions admin to make the account
acuster: next? martin?
gdavis_: thnx
desruisseaux: JSR-275
desruisseaux: I can perform the switch this week
acuster: (as you can see we make up the rules as we go along)
desruisseaux: (or next week)
desruisseaux: Can I proceed?
desruisseaux: It will be an incompatible change for anyone using javax.units
desruisseaux: It is actually quite a significant change.
desruisseaux: I guess that those who don't use javax.units directly will not be affected.
desruisseaux: The steps would be
jgarnett: (aside: gdavis can you get the nod from someone other than me, want approval to be clear, if you can also have Lucus ask on the email list we will have a record of the exchange. The big thing is to have read the developers guide and understand how not to break the build)
desruisseaux: 1) Make the change on GeoAPI
desruisseaux: 2) Reflect the change in GeoTools implementation.
desruisseaux: So any objection if I proceed this week?
jgarnett: Sounds great
desruisseaux: (I will send an email on the mailing list if nobody object on this IRC)
jgarnett: as someone who has done this experiment on a branch I am very confident in the result.
jgarnett: 3) update the user guide
jgarnett: (ie please don't forget documentation...)
desruisseaux: Yes (I always forget the user guide...)
desruisseaux: Objections?
desruisseaux: Counting to 3...
desruisseaux: 1
desruisseaux: 2....
desruisseaux: 3....
desruisseaux: Okay I will send an email.
jgarnett: send the email and we will try and vote on it promptly.
desruisseaux: Thanks
jgarnett: 3) hg repo of trunk
aaime: what changes are we talking about?
desruisseaux: (will do)
aaime: (sorry to be slow)
jgarnett: acuster this sounds like you ...
jgarnett: aaime the JSR-108 units package is old and dead and we are the only project still using it
desruisseaux: Andrea: the switch from the withdrawn JSR-108 to the JSR-275 (its replacement)
jgarnett: JSR-275 is kind of like JScience without the political baggage (and just focused on less scope)
jgarnett: So the definition of Unit and Measure classes.
aaime: Ok
acuster: jsr-275 has both units (the names of the sizes) and other richer structures such as measures (a unit + a value)
jgarnett: It was one of those things that had to wait until Java 5... and it is really nice to work with.
desruisseaux: It is a significant change for anyone using javax.units. Those who don't use these classes should not be affected.
jgarnett: (I think it is the only JSR-XXX code contribution I have liked since Java 1.4)
simboss n=chatzill@host204-206-dynamic.36-79-r.retail.telecomitalia.it entered the room.
acuster: simboss do you use javax.units?
simboss: ciao acuster
acuster: going once, twice, ....
simboss: it should be used in some of the gridcoverage plugins
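As a rough taste of what the replacement API looks like, here is a minimal sketch written against the JSR-275 reference implementation; the exact class names may differ slightly in whatever version GeoTools ends up bundling:

    import javax.measure.Measure;
    import javax.measure.converter.UnitConverter;
    import javax.measure.quantity.Length;
    import javax.measure.unit.SI;
    import javax.measure.unit.Unit;

    public class UnitsExample {
        public static void main(String[] args) {
            Unit<Length> metre = SI.METRE;
            Unit<Length> kilometre = SI.KILO(SI.METRE);

            // A Measure couples a value with its unit, replacing bare doubles.
            Measure<Double, Length> distance = Measure.valueOf(1500.0, metre);

            // Conversions come from the units themselves.
            UnitConverter toKm = metre.getConverterTo(kilometre);
            System.out.println(toKm.convert(distance.doubleValue(metre)) + " km");
        }
    }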
acuster: 3) hg repo: There's a new repo conversion of geotools here:
acuster: hg clone trunk should get it on anyone's disk if they want it
jgarnett: not sure I understand acuster
jgarnett: what is hg
acuster: a similar clone is coming for geoapi
acuster: mercurial
acuster: also you can look at the changes with colored diffs
jgarnett:
acuster: the addresses will be cleaned up when we get the time
jgarnett: so is this a fork? or just a nice way to look at svn?
acuster: every step is a fork
desruisseaux: It is synchronized on GeoTools SVN.
acuster: actually I'm doing it mostly to work on geometry
acuster: I expect I'll break geoapi
jgarnett: you saw that gdavis does not mind if gt-geometry goes back to unsupported...
acuster: so I can have a 'public' geoapi that will not be 2.5-SNAPSHOT
desruisseaux: It is a mirror of GeoTools SVN synchronized every hour, which allows Mercurial users to get the source.
acuster: but yeah, that trunk won't get any changes to it
acuster: it's upstream
acuster: just for fun/instruction/learning about distributed revisionning
jgarnett: okay
simboss: I would like to add an item for iso19108
jgarnett: I worry about us starting to fork off as individual teams again
desruisseaux: Just to repeat myself: it is a mirror of Geotools SVN, not a fork.
jgarnett: it was a long hard year when simboss was off on a branch and I don't want to repeat the experience if we can avoid it.
jgarnett has changed the topic to: 0) what is up 1) commit access 2) switch from JSR-108 to JSR-275 3) hg repo of trunk 4) Header cleanup, phase I 5) iso19108
acuster: 4) Header cleanup
acuster: Cedric has a java class that looks for issues and resolves some of them
jgarnett: So can we run a script? and get 90% of the way there ...
acuster: I expect that we'll start on metadata/referencing later this week
acuster: paying pretty close attention to what's going on
acuster: to make sure the script is well behaved
acuster: I've changed the script header to Geotools – An Open from Geotools – The Open
acuster: I know Jody preferred the rightmost
acuster: apparently James Macgill preferred the less aggressive "an"
acuster: I don't really care much if someone feels strongly
jgarnett: I only care that it is "GeoTools" not Geotools
acuster: oh, good
jgarnett: I think we voted on the tag at some point before SF ate our project history
acuster: I had thought the other was the way
simboss: I think we de-facto "the" toolkit
simboss: we are
acuster: yeah, so no need to claim to be
jgarnett: The logo on our web page says: GeoTools - The open source Java GIS toolkit
acuster: okay, hands up then: Vote is "I like and want to keep "The" "
acuster: acuster: -0
jgarnett: +1 I like it nice and strong and confident
simboss: +1
aaime: +0
desruisseaux: I prefer "an"...
desruisseaux: Well, count me +0
acuster: "The" wins!
desruisseaux: (or 0 without the + actually)
acuster: unopposed
acuster: I'm done, next?
simboss: iso19108
desruisseaux: One of our guys has implemented the majority of interfaces
desruisseaux: (well, I didn't have the time to look at them yet)
desruisseaux: (which is why we didn't propose to commit them)
desruisseaux: In my mind, ISO 19108 is pretty close to referencing.
simboss: it must be
jgarnett: um I fall asleep when ISO is mentioned; can someone remind me what ISO 19108 is about? Is this the time thing...
desruisseaux: It is not accidental that we grouped them into the "referencing" in GeoAPI.
simboss: temporal
desruisseaux:
simboss: I agree martin, that's why I asked alessio to stop and tell the mailing list
desruisseaux: I can ask Mehdi to commit in some "spike" or "unsupported" directory if peoples wish...
aaime: if that helps collaboration I have no problems with that
simboss: well, we pretty much need this in place, the un/marshalling part can wait a bit
jgarnett: Is there any overlap with Joda time? Or is this our own thing ...
desruisseaux: So you suggest to commit the classes without the jaxb part?
simboss: so alessio can hold a couple of days for this committ and then we can try to coordinate
simboss: well, alessio started to implement this wrapping joda time
simboss: (which was my next question)
simboss: but as I understand you guys have been using only date-time
simboss: do you feel confident about that?
desruisseaux: We went with date/time only for now. It is not a definitive decision. We expect to revisit that when Joda will be bundled in the JDK (which is planned).
desruisseaux: But we though that introducing new dependencies close to the referencing level would be a sensitive issue.
simboss: it is not definitive but it can be a strong decision at least for the moment
simboss: it is going to pass some time before jdk 1.0 goes mainstream for JEE
aaime: jdk 1.0
simboss: ops
desruisseaux: I have not looked at the code yet, so I don't know the amount of overlaps with Joda. If it appears that Date/Calendar/DateFormat can do the work for now, then I'm tempted to try.
simboss: 1-7
desruisseaux: Keeping in mind that we will change anyway.
jgarnett: Joda is being written up as a formal JSR is it not?
desruisseaux: Yes
desruisseaux: JSR-310
desruisseaux: (if my memory serve me right)
simboss: (I'll go have a deep look afterward)
desruisseaux: But class names, packages, etc. are not the same.
aaime: the "trouble" is that Joda is... what, an extra 500kb dep?
desruisseaux: There is also slight API difference.
simboss: well, let's put it this way
jgarnett: Martin so far it is mostly you who prefers waiting for JSRs; I have found java code growth to be crazy in recent years; lost all respect for them when log4j was skipped.
simboss: if there is a considerable amount of work already done using date-time-calendar
simboss: I think it make sense to leverage on that for the moment
simboss: without reimplement from scratch using joda
simboss: but still I think it might be worth to check anyway
desruisseaux: I'm not sure that it involve that much reimplementation. We need to look at the code
desruisseaux: (I haven't)
desruisseaux: ISO 19108 is mostly about TemporalReferenceSystem
desruisseaux: There is slight overlaps with Joda, but maybe not that much.
desruisseaux: Given that Mehdi has already finished an implementation on top of Calendar, I would like to start with that.
simboss: np
desruisseaux: Then we could look in that code if there is any stuff not sufficient
desruisseaux: or we can try to quantify the amount of work that could be delegated to Joda.
simboss: np
simboss: alessio and daniele have scheduled some time to work on that anyway
desruisseaux: Okay
simboss: hence if we think that it might be worth moving to joda right away
simboss: we can do that and then you can
simboss: do a code review
desruisseaux: Well, if using Calendar for implementing ISO 19108 implies 20 kb of code, it may be worth to keep this approach compared to adding a 500 kb dependency. But if the amount of duplicated code is more like 200 kb, then this is another story.
desruisseaux: We will see
desruisseaux: I'm done...
jgarnett: so can we expect a proposal one way or another as agreement / experimentation occurs on the mailing list?
desruisseaux: I will ask Mehdi to write one tomorrow.
jgarnett: thanks muchly
simboss: well, I am pretty much open to anything here
jgarnett: that is probably it for the meeting; thanks for ending on time everyone.
simboss: so yeah, you have the floor now on this
desruisseaux: thanks
simboss: to trigger the discussion/work
simboss: ah btw
simboss: I finally had some time to look at the equal/hash classes vs HashUtil
simboss: I preferred to remove the two classes
simboss: and to use Utilities
simboss: since the most interesting methods are already there
simboss: one note though, should we provide an initial seed for hashcoding simple fields?
simboss: like a nice prime number?
desruisseaux: Usually I try to use a initial seed different for each class.
desruisseaux: (increase the chance that "empty" instances of different classes do not collide)
jgarnett: (sad I thought of another topic)
jgarnett: Is there any interest in figuring out what happened with jaxb?
desruisseaux: I often use the (int) serialversionUID just because my brain is not a good random number generator.
aaime: I believe jdeolive is trying to get rid of jaxb deps from his modules
simboss: does the javadocs state something about this?
aaime: by reusing some of the jaxme classes and putting them in a custom made jar
simboss: just for users?
desruisseaux: I'm not sure, but I can add that.
jgarnett: yeah it is an odd thing; it is like we need our dummy jax-b module to be "provided" but then eclipse does not really recognize that and we get conflicts...
desruisseaux: (starting NetBeans...)
jgarnett: As I understand the jaxb classes were used for this same topic (ie understanding dates)
aaime: jgarnett, correct
acuster: what was the deal? he was using jaxb but not using it at the same time?
aaime: parsing and encoding date/time/blah in the format used by XML
aaime: acuster, not for parsing
aaime: not for encoding
aaime: just as a way to get proper date handling afaik
aaime: the actual parsing/encoding is Eclipse XSD + custom code
jgarnett: acuster there was no difference between the compile and runtime env. in eclipse. So he ended up with both classes on the classpath when parsing... and got tripped up.
***jdeolive is catching up
jdeolive: yes, i am having to remove the jaxb dependencies which is a pretty big pain
jdeolive: but i am left with little choice
acuster: so it was used during compile but can't be used at runtime?
desruisseaux: Just a slight note about hash: I inverted the argument order compared to HashcodeUtil in the hope to make more readable chained calls for those who want. Eg: hash(field1, hash(field2, hash(field3, initialSeed))) compared to hash(hash(hash(initialSeed, field1), field2), field3). Well, not sure it is really more readable, but anyway putting the seed last allows us to make it an optional argument...
desruisseaux: ...if we override the hash methods with flavors without seeds.
aaime: acuster, in Eclipse compile/test/runtime classpath are one
simboss: desruisseaux
simboss: that's the approach i was going to take in those two classes
desruisseaux: I think that Adrian has been able to get Eclipse running the test with some classpath settings in the IDE?
simboss: i.e. using overridden methods with empty seed
simboss: no seed
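The chained-seed idiom under discussion, sketched as a standalone example; the two-argument hash(value, seed) helpers below only mimic the Utilities methods rather than calling the real class:

    public class ChainedHashExample {
        /** Per-class seed; Martin suggests reusing (int) serialVersionUID for this. */
        private static final int SEED = 0x2f5b;

        private final String name = "example";
        private final int count = 3;

        // Stand-ins for the Utilities.hash(value, seed) flavours being discussed.
        static int hash(Object value, int seed) {
            return 31 * seed + (value == null ? 0 : value.hashCode());
        }

        static int hash(int value, int seed) {
            return 31 * seed + value;
        }

        @Override
        public int hashCode() {
            // Seed goes last in each call, so the innermost call takes the class seed.
            return hash(name, hash(count, SEED));
        }
    }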
aaime: desriusseaux, yes, but you have to redo the whole process of fixing the classpath manually after every mvn eclipse:eclipse
jdeolive: yeah that works but its hardly optimal
aaime: change a single dep in a pom and you're screwed
jdeolive: plus you have to do it for every test configuration
desruisseaux: It leads me to an other proposal (maybe longer plan). What about breaking the build in 2 or 3 parts, rather than building the whole GeoTools project in one bunch? It is becoming bigger and bigger...
acuster: yeah, that would suck
desruisseaux: If there is a build focusing on metadata + referencing + epsg plugins for example, it would be easier to hack the build in order to produce different flavors (with / without jaxb, all in one fat jar, skimmed versions, etc...)
acuster: still I don't quite get the design that uses a lib to compile and then conflicts with it during the run; sounds very fragile
aaime: desriusseaux, yeah, I've been thinking about that (as well with others)
aaime: yet I fear we'd end up having, among the "parts" or gt2
aaime: the same relationship gs has with gt
aaime: dependencies on snapshots that you have to build on your machine
aaime: because you need this or that fix
jdeolive: i agree with adrian, seperating the buld to get around an issue with one part of the library conflicting with another seems a bad idea
jgarnett: martin your ideas have merit; but I was not going to worry about it until after we graduate (too many moving parts this month)
acuster: with religiously regular releases (RRR) it might be workable
desruisseaux: Yes Jody, I think your approach is safer.
jgarnett: we still need to bundle up metadata+referencing+geoapi+epsg-hsql as a single download and see if the result is worth the effort.
aaime: acuster, for the specific gs case, we can probably depend on a stable gt2 release for one week
jgarnett: acuster++ RRR would be great.
aaime: hardly longer than that
aaime: even with monthly releases
desruisseaux: Hudson may be of some help.
desruisseaux: It can deploy.
acuster: right but that's the top of the stack no?
acuster: metadata has been rock solid for a long time now
acuster: referencing issues seem to be calming down
acuster: now that there are all the right 'my axis is before yours' hooks in place
jgarnett: Yes and no; referencing was solid until it got jaxb dependencies; referencing needs to be cleaned up at the factory spi level before I can show it to others ... there is always work to be done.
aaime: we're having quite some issues with multithreaded access
jgarnett: but your point is a good one; we should be able to "deploy" parts of our stack separately
aaime: people using GeoServer in anger are being hit
jgarnett: aaime my code from last year fixes it; but I need some of martin's time to debug one bit of functionality (lookup by id)
jgarnett: and now I have no customer to pay for it; so it is listed as technical debt.
aaime: jgarnett, yes, but I understand your changes were deep?
jgarnett: if we roll out something based on my classes backed on to h2 we can test without impacting others
jgarnett: my changes were
jgarnett: a) renaming things so I could tell what they were for
jgarnett: b) setting up object pools to handle the concurrency load
aaime: jgarnett, if I find a way to stop sleeping during the night and work isntead I'll have a look
jgarnett: c) looking at the caching code and stress testing
jgarnett: understood aaime
acuster: that code landed but is not being used?
jgarnett: aaime you made an h2 port before right?
aaime: (sorry, I'm really that busy)
jgarnett: if you can do it again I can just change which super class it extends
jgarnett: and we can try out my infrastructure...
aaime: jgarnett, you don't get how busy TOPP people are these days
jgarnett: aaime i know you are that busy; but when you want to solve it give me a couple days notice - I am that busy too
aaime: Ok ok
aaime: well, time to get some sleep for me
jgarnett: thanks for the chat.
aaime: I need to get up at 6am tomorrow
acuster: hmm, that seems less like bug fixing and more like a whole new functionality
aaime: acuster, right, but that seems to be necessary in order to make the code scale on very high concurrent loads?
acuster: ok
aaime: (I don't know, I'm trusting what other people say and the issue we're seeing in 1.6.x where Jody's code has not landed)
acuster: is there a bug report on all this? It's the first I hear of these issues
aaime: yes
aaime:
acuster: and I'm not sure they invalidate the idea of having a stable referencing layer released separate from the feature level layer
jgarnett: acuster; you are right that is why I was paid for 4 months to do the work ...
jgarnett: martin gave me lots of careful reviews; and my code is done.
aaime: I really have to go
jgarnett: I have one bug with ReferencedObject lookup or something.
jgarnett: -
***acuster fades as well
jgarnett: night
jgarnett: has anyone posted the logs ...
aaime left the room.
jalmeida n=jalmeida@189.62.58.87 entered the room.
jgarnett: suppose that is me then...
Weekly IRC 2008 May 19
- what is up
- we're moving to mvn 2.0.9
- Graduation work: headers, review.txt
#"Fix Versions" in JIRA
. desruisseaux has changed the topic to: Weekly IRC 0) what is up 1) we're moving to mvn 2.0.9 2) Graduation work: headers, review.txt 3) "Fix Versions" in JIRA
<desruisseaux> Shall we start?
<aaime> sure
<desruisseaux> 0) What is up
. aaime bug fixing for the releases, running CITE tests, and making the releases
<desruisseaux> Martin: MosaicImageReader again (working on the special case where the grid is regular - no need for an RTree in this case)
<acuster> acuster — starting on isogeometry module; looking into Hg -> subversion pathways
. groldan is hacking hard on ArcSDE, implementing per connection command queue
<acuster> chorner, dwins elizard ggesquiere hbullen mgrant pramsey vheurteaux ?
<pramsey> ?
<vheurteaux> yep ! hello
<desruisseaux> What is up
<vheurteaux> boring things non GT related
<acuster> weekly IRC meeting if you all want to pitch in
<acuster> going twice....
<desruisseaux> 1) We are moving to mvn 2.0.9
<acuster> we have cedric's patch waiting
<acuster> I'm planning to apply it tomorrow so I don't have to think about it anymore
<acuster> so everyone should be using maven 2.0.9 now
<acuster> that's all
<groldan> does it mean the build won't work anymore with < 2.0.9?
<groldan> ah ok
<desruisseaux> It is likely to continue to work
<acuster> not sure
<desruisseaux> (I means likely to continue to work with 2.0.8)
<acuster> but 2.0.9 will be our default
<desruisseaux> But using 2.0.9 would be more deterministic.
<groldan> okay, cool
<desruisseaux> 2) Graduation work
<acuster> We are ready for the final push
<acuster> module maintainers will be responsible to update all headers and the review.txt files
<acuster> also I'd like to review where the data in sample data came from
<acuster> and get data with overlapping data with non-identical projections in the shapfile group
<acuster> sorry
<groldan> yeah, I got rid of all sample data in sde some time ago just because I didn't remember where it came from
<acuster> and get overlapping data with non-identical projections in the shapefile group
<acuster> for the modules of you three what's a reasonable deadline for this work?
<aaime> Eh, officially I'm the maintainer of postgis-versioning only
<aaime> and can help on rendering and jdbc
<aaime> but that's it I guess
<acuster> okay, the plan is to push a bunch out now and use that to pressure the slower folk
<acuster> that's all I guess
<groldan> at one time I thought the review was meant to be done by someone else than the module maintainer?
<groldan> but I might be wrong
<acuster> ooo, I like the idea
<acuster> okay, so we'll revisit this once we get some modules done
<acuster> next?
<groldan> sure
<groldan> ?
<groldan> desruisseaux: I guess floor is yours?
<desruisseaux> JIRA
<desruisseaux> More precisely "Fix version" in JIRA description.
<desruisseaux> Can we let it to "unknown" when the reality is that we don't know?
<groldan> so your concern on the ml makes sense
<groldan> and yes, we seem not to have a policy about that?
<desruisseaux> A while ago the policy was to always set a fix versions on the basis that task without fix version fall in a black hole.
<desruisseaux> Maybe it is true for task without assignee.
<groldan> I wonder how much more work that would be for someone like andrea, who gets almost every issue reported assigned to him
<groldan> but that's in geoserver
<groldan> in geotools things may be a bit different
<aaime> From where I stand they are exactly the same
<aaime> I have so many issues assigned to myself
<aaime> that everything not exactly in the next release jira will be completely ignored
<aaime> black hole effect
<aaime> Stuff in the next release will be at least looked at, not necessarily fixed, mind that
<groldan> what I can say is that it is already hard to go thru the next version list and decide what one can actually achieve
<desruisseaux> Not sure if we are allowed to have a "maintainer-by-maintainer" policy. But on my side, I look from time to time at the list of tasks I'm assigned to. So even if they are not scheduled for every release, I see them.
<desruisseaux> I also tend to look at every task reported against the metadata and referencing modules, so tasks for those do not fall into the black hole either.
<aaime> I don't see a problem with a "maintainer to maintainer" approach
<desruisseaux> May take a long while however (unfortunately...)
<aaime> Yeah, for me it's different, my main responsibility is to keep GeoServer going forward
<aaime> anything that's not 100% in that topic basically does not exist
. simboss_ is now known as simboss
<aaime> also, I'm not maintainer of anything besides postgis versioning
<aaime> but in fact I keep up other modules whose main maintainer is missing or is someone else
<desruisseaux> Then, if nobody object, I will try to setup more accurate "fix versions" for metadata and referencing. It may implies many "unknown" fix versions. I would suggest to keep them as is if nobody object...
<groldan> right, you do, and it's worrying there are such a number of almost-orphaned modules
<aaime> I surely won't complain
<aaime> each maintainer is free to manage the issues of his modules as he sees fits imho
<desruisseaux> Thanks. I'm done then...
<groldan> and what about the main ones
<groldan> which have various maintainers
<aaime> groldan, same
<groldan> same thing?
<groldan> I see
<aaime> too many maintainers, no maintainer
<groldan> so in the end it would be just a mean of being more picky, just like what you were asking about issue reporting aaime
<groldan> ie, just do care
<aaime> Eh, yeah, the thing is that the number of issues vs the number of developers is simply overwhelming
<aaime> while the devs are already filled solid of other work
<aaime> so basically stuff that's moving is stuff that some way or the other has some commercial sponsoring behind it
<aaime> like it or not...
<groldan> right
<aaime> desruisseaux, stupid question
<desruisseaux> Andrea, yes?
<aaime> do you think it would be possible to make streaming renderer participate in the go-1 architecture?
<desruisseaux> I can ask to Johan
<aaime> you have pluggable renderers, right?
<desruisseaux> Yes
<desruisseaux> There is a "Renderer" interface for that.
<desruisseaux> Johan has separated "Canvas" and "Renderer" exactly for the purpose of plugging in different renderers.
<aaime> and "renderer" is something that can draw anything, like decoration, north arrow, map scale?
<desruisseaux> Renderer is basically a collection of implementation-dependent Graphics (this is the GO-1 design), and a Graphic can draw anything like a decoration, north arrow, map scale.
<desruisseaux> Canvas takes care of referencing stuff (which is renderer-independent, so can be leveraged across different renderers).
<aaime> so it may be sitting in geographic space, or in "paper" space
<desruisseaux> One or the other.
<aaime> Ok, interesting
<aaime> thanks for answering
<desruisseaux> The renderer gives the opportunity to draw in "objective" or "display" space (again the "objective" and "display" are GO-1 terminology)
<desruisseaux> As Graphic choice.
<desruisseaux> (typo: At Graphic choice)
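Read as plain interfaces, the separation described here comes down to something like the sketch below; these are simplified stand-ins for illustration only, not the actual GO-1 / GeoAPI types:

    import java.awt.Graphics2D;

    // Canvas (not shown) owns the referencing concerns; a Renderer owns a set of
    // Graphics; each Graphic decides which space it draws in.
    public interface RendererSketch {

        enum Space { OBJECTIVE /* geographic */, DISPLAY /* paper or screen */ }

        interface Graphic {
            Space drawingSpace();
            void paint(Graphics2D target);
        }

        void add(Graphic graphic);        // decorations, north arrows, feature layers...
        void paintAll(Graphics2D target);
    }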
<acuster> we done?
. acuster decides we are and goes to post the logs
<desruisseaux> Yes on my side.
<aaime> done
The GeoTools 2.5-M2 milestone release is available for download.
The GeoTools 2.5 series has been updated to use Java 5, and has undergone many improvements. This release is centered around making a new Feature model available. The new feature model is based on formal GeoAPI interfaces and allows the library to work with complex data structures.
Since this is the first time the Feature model is being made available to the public this release should be considered alpha. Both the GeoServer and uDig projects have successfully migrated, so you are in good company.
Weekly IRC meeting: 5 may 2008
0) what is up
1) svn cleanup
2) 2.5-M2
3) incubation
4) FeatureCollection aggregated functions
5) backport paging to 2.4.x
<jgarnett> okay lets go ... although I had hoped to hear from simboss...
<jgarnett> 0) what is up
ggesquiere (n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools
<acuster> acuster — cleaning svn; looking into mercurial
<desruisseaux> Martin: MosaicImageReader and postgrid.
<jgarnett> jgarnett - looking at Process discussion from last week and updating the codebase, trying to remember what was going on w/ ArcSDE now that I can run tests, annoying Eclesia with geoapi feedback when it sounds like he is on a deadline.
<Eclesia> jsorel : removed some swing widgets that will not be supported in the future
<groldan> groldan: adding tests to geoserver wms, quite stuck on static stuff
<acuster> no deadline, merely a distinct urge to make progress
<acuster> anyone else?
<jgarnett> simboss ping
<jgarnett> 1) svn cleanup
<jgarnett> acuster and/or martin ?
<acuster> afabiani, Awp_ chorner dwins elizard ggesquiere hbullen jdeolive mgrant ?
aaime (n=aaime@host97-45-dynamic.3-87-r.retail.telecomitalia.it) has joined #geotools
<acuster> me
<afabiani> hi
<Awp_> it wasn't me
aaime has changed the topic to: 0) what is up 1) svn cleanup 2) 2.5-M2 3) inccubation 4) FeatureCollection aggregated functions
<acuster> any of you want to give us a quick what is up?
<acuster> going once...
<acuster> okay gone
<acuster> SVN cleanup
<acuster> hmm, I even wrote up a page on what I'm doing
<acuster> in summary
<acuster> we start with the repo, use svnadmin to get a dump
<acuster> run that through svndumpfilter a bunch of times
<acuster> that gets rid of 1) most of udig 2) big files 3) lots of stuff we don't care about, e.g. .cvsignore files
<acuster> we go from 3.0GB to 1.4GB or so
<jgarnett> (link to page?)
<acuster> then we run the dump through a java class to pick up all the files which were duplicates
<acuster>
<acuster> all the files which were added as duplicates
<acuster> only we don't do anything if they were added in the same commit
<acuster> because fixing that would be hard and error prone
<acuster> anyhow,
<acuster> we are good to go
<acuster> we need from you all (1) a go ahead (2) a date or date range where we can do this work
<acuster> how does this week work for everyone?
<desruisseaux> Fine for me.
<aaime> yap
<groldan> for how long it will mean having no svn?
<jgarnett> how does it work with refractions sys admin?
<acuster> groldan, I'm guessing 24 hours probably less
<groldan> cool
<acuster> jgarnett, I need to clear it with them when I get a sense from the gt community what works here
<acuster> I have no idea of GS or uDig deadlines in the near future
<acuster> also I need to coordinate with you jody to work out the permission stuff
<acuster> okay, people seem willing to do it. I'll contact refractions and get a date from them, then send out a confirm email
<acuster> if it causes a time conflict for anyone at that point, yell loudly
<aaime> Hem... deadlines... groldan, what was the planned release date for gs 1.6.4?
<acuster> I'd like to do it wed/thrusday
<aaime> end of this week, or end of next week?
<groldan> I'm not really sure, guess end of week?
<groldan> this one I guess
<aaime> acuster, are you planning to do the work during the week or during weekend?
<acuster> week
<acuster> since it depends on refractions
<aaime> Hmm.... ok, then we cannot cut 2.4.3 on Friday I guess
<jgarnett> acuster I am away for the weekend+monday; refractions sys admin can also do permissions if needed.
<acuster> ok
<acuster> aaime, groldan is this week a bad idea for you?
<acuster> is next week better?
<groldan> hmmm I thought I could afford a day without gt svn, not sure about andrea
<aaime> ah, me too
<aaime> the problem is the timed gs release
<aaime> next week is probably going to work better for us, yes
<aaime> I'm asking our release manager just to make sure
<aaime> (crazy times at gs)
<groldan> acuster: waiting till next week would be killer for you?
<acuster> okay, I will now aim to take svn down for 24 hours the 13,14, or 15th
<acuster> nope
<acuster> glad to work around your schedule
<acuster> that's all, next.
<jgarnett> 2) 2.5-M2
<groldan> that's relieving, thanks
<jgarnett> I tried to release this last week; and got stuck on some work simboss was doing; I would like to try again this week ...
<jgarnett> is there anything else that is going to hold me up?
<groldan> yes
<jgarnett> (the goal here is to make a milestone release so some uDig developers can evaluate switching to trunk...)
<groldan> I need to add unit tests for the paging + querycapabilities stuff
<jgarnett> okay; can you ping me when done?
<groldan> planning to do that tomorrow though
<acuster> why does that block a release?
<jgarnett> I don't really mind why (my guess is gabriel does not trust the code until unit tests are in place?)
<groldan> not sure if it should, because there might be a couple regressions/new bugs we don't know about
<acuster> if jody can wait, it's all good
<acuster> the uDig folk are chomping at the bit though
<jgarnett> It is only a milestone release; if it "works" then it is worthwhile me making "SDK" releases available for uDig developers (so they can migrate to trunk)
<groldan> anyway, if jody is planning to do it this week I can put a hardline myself of tomorrow
<acuster> great
<jgarnett> moving on ...
<groldan> okay, I would prefer you wait for it so
<groldan> since udig uses a lot of postgis
<groldan> which's what I touched
<jgarnett> understood; also that functionality would make a great tableview :-P
<jgarnett> 3) incubation
<jgarnett> no progress to report; I see some (c) headers have been updated ...
<acuster> do we need to schedule a big push for that?
<jgarnett> this is really in the hands of module maintainers right now....
<jgarnett> we do
<jgarnett> a search and replace would be a good start.
<acuster> that sound scarily lazy
<acuster> desruisseaux, how are your modules on the review.txt front?
<acuster> anyone else know where they stand on their modules?
groldan has changed the topic to: 0) what is up 1) svn cleanup 2) 2.5-M2 3) inccubation 4) FeatureCollection aggregated functions 5) backport paging to 2.4.x
<desruisseaux> I don't think that anything changed since the review.txt files has been wrote.
<desruisseaux> (I means, it seems to me that the pool of developper in metadata, referencing and coverage has not changed)
<acuster> which means what? are you ready for graduation?
<desruisseaux> If graduation == changing headers, yes.
<acuster> that's pretty much all that left
<jgarnett> graduation == changing headers & review of code
<acuster> we can smell the OSGeo on the horizon
<acuster> anyone else?
<acuster> can we give ourself a deadline?
<acuster> ourselves
<jgarnett> I have not looked at any of my modules recently; a deadline would be fine.
<acuster> end of the month?
<acuster> June 21st (i.e. summer)?
<desruisseaux> Maybe: target end of the month, and see at that point where we are?
<jgarnett> let's go for summer; the smart thing would be to look when the next OSGeo meeting is .... and plan to finish our requirements two weeks previously; to give the incubation committee a chance to review our work.
ggesquiere_ (n=gilles@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools
<jgarnett> Let's try and write up our incubation stuff up at the end of the month; it will be a good test of where we are.
<acuster> good
<jgarnett> 4) FeatureCollection aggregation functions
<jgarnett> aaime ?
<aaime> Yes
<aaime> summary: feature collection has some aggregate functions
<aaime> like distinct/min/max/average and so on
<aaime> they are implemented as fc visitors on the standard fc
<aaime> but for the jdbc fc, no, they are encoded as sql
<aaime> unfortunately that encoding is done once for all db in a way that breaks postgis (among other)
<aaime> and that breaks if the expression passed as an argument is not a simple PropertyName
<aaime> I need to fix it
<jgarnett> thinking ...
<jgarnett> can the different datastores; list PostGIS
ggesquier (n=gesquier@arl13-3-88-169-136-131.fbx.proxad.net) has joined #geotools
<jgarnett> implement their own FC? I think PostGIS already does ...
<aaime> it's empty and unused
<aaime> (but yes, it's there)
<aaime> moreover, that would not solve the problem of encoding an expression like "attribute + 10"
<jgarnett> ah; it originally was the one that had the visitor stuff encoded as SQL
<jgarnett> and it was pulled up and shared.
<aaime> My point is that we can keep it shared
<jgarnett> okay
<aaime> we already have factored out ds differences in SqlBuilder/FilterToSql
<aaime> but sqlbuilder does not expose that functionality in a useful way
ggesquiere_ has quit (Client Quit)
<aaime> I would just need a encode(Expression) method in SQLBuilder and I could implement it the right way
<aaime> so that the col names are escaped according to the db needs and
<aaime> so that complex expressions are encoded properly as well
<aaime> but to do so, I need to add a method to an interface
<desruisseaux> I need to go before the rain... Bye all!
desruisseaux has quit ("ChatZilla 0.9.81 Firefox 2.0.0.14/2008042015")
<aaime> Now, SQLBuilder is one of those interfaces that are public
<aaime> but not really "published"
<aaime> it's not like our end user docs speak about them
<aaime> so I was wondering, can I add the encode(Expression) method to the interface
<aaime> without going thru a proposal?
<aaime> (and I need it for the 2.4.x series)
aaime listens to this resounding silence
<acuster> you could probably persuade jody if you promised him some fodder for the user docs
<aaime> fodder?
what's that?
<acuster> stuff
<acuster> food technically
<aaime>
<acuster> alegorically, material
<acuster> cannon fodder ---soldiers to get killed by cannons
<jgarnett> thinking ...
<jgarnett> aaime; this JDBC stuff is a contract between you and the subclasses of JDBCDataStore
<jgarnett> it is not really user facing code is it?
<aaime> no
<jgarnett> then personally I don't care too much.
<aaime> it's not meant to be, yet of course the interface of SQLbuilder is public
<aaime> So it's ok for me to add that method?
<aaime> +1 for me
<aaime> jgarnett, jdeolive? vote?
<jdeolive> +0 I have not followed this issue
<acuster> sounds good
<simboss> +1
<jgarnett> +1
<acuster> in your email, you spoke of not having time to do the right thing.
<jgarnett> aaime; there is another way - volunteer to be the jdbc module maintainer
<acuster> if that's true, any chance you could lay out what that would be for future reference
<aaime> acuster, this way I can do it right
<acuster> great
<aaime> there were other "right" options that were more expensive than this one
<aaime> jgarnett, not ready to give up the last bit of my spare time
<jgarnett> (hey I gotta try)
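The addition just voted on amounts to one extra method on the builder; the signature below is a sketch of the idea rather than the exact code that will be committed:

    import java.io.IOException;
    import org.opengis.filter.expression.Expression;

    public interface SQLBuilderAddition {

        /**
         * Encode an arbitrary filter expression (e.g. "attribute + 10") as SQL,
         * using the dialect's own column-name escaping. Aggregate visitors on the
         * JDBC feature collections could then build "SELECT MIN(...)" style queries
         * from the returned fragment instead of hand-rolling column names.
         */
        String encode(Expression expression) throws IOException;
    }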
<jgarnett> moving on ...
<jgarnett> 5) backporting paging to 2.4.x
<jgarnett> Now this one is user facing groldan
<groldan> more jdeolive's actually
<groldan> but anyways
<jgarnett> so tell me why we are considering this? (hint it involves you being paid and 2.5.x being a long way away)
<acuster> lol
<groldan> sort of
<jdeolive> jgarnett: yes
<acuster> do we have a sense of where the OGC is with their various proposals on this issue?
Eclesia bye ++
<jdeolive> jgarnett: and it's only adding api... which we have allowed in the past on a stable branch
Eclesia (n=sorel@mtd203.teledetection.fr) has left #Geotools
<aaime> acuster, OGC side stepped the issue
<jgarnett> I have tried to limit us to adding functionality (additional plug-ins etc...) and not api in the past.
<jgarnett> but your point is taken.
jdeolive laughs when people talk about enforcing backwards compatibility in geotools
<jdeolive> but your point is taken as well
<jgarnett> 2.4.x does not have any user docs; but I would like to ask that we behave a bit better once the 2.5.x series goes out.
<jgarnett> so how about this gabriel; you do the work; and you write a nice note about it in the release notes; explaining that we are very bad etc...
<groldan> I can do the work and add some doco, won't say we're bad though
<jgarnett> lol
<jgarnett> well we are at least treading in a gray area
<jgarnett> possibly at night.
<acuster> the puritan vs the latin
<acuster> groldan, you have what you need?
<acuster> are we done?
<groldan> I have what I need, if that means constantly moving on a gray area
ggesquiere has quit (No route to host)
<groldan> okay thanks for the exception on behalf of the ones doing this paid work
<acuster> money talks, eh?
<groldan> and there are three minutes left for the end of the meeting yet
<groldan> acuster: that you already know
<groldan> that's how geotools goes forward I guess
<groldan> </meeting>?
<acuster> indeed
0) what is up 1) paging 2) Hello to SoC H2 student
18 <aaime> Hi?
19 <aaime> zero topic meeting?
19 * jdeolive loves those kinds of meetings
20 <aaime> lol
20 *** jdeolive sets the channel topic to "Weekly IRC: 0) what is up 1) configuration".
20 <dwins> i've asked the student I'm mentoring to stop by and say hello this week, should I tell him to come back when people are around?
20 <dwins> (for gsoc)
20 <aaime> shouldn't we talk/vote about paging?
20 <aaime> groldan?
20 <jdeolive> oops
20 * jdeolive forgot what day it is
20 *** jdeolive sets the channel topic to "Weekly IRC: 0) what is up".
20 <groldan> hi, got distracted hacking paging
20 <aaime> jdeolive: ha ha
20 <groldan> yeah, add paging as topic please
21 *** aaime sets the channel topic to "Weekly IRC: 0) what is up 1) paging".
21 <aaime> anything else? we're already 20 minutes deep into the meeting hour
22 <aaime> acuster, groldan, jdeolive, jgarnett
22 <aaime> anything else?
22 <groldan> not here
22 <jdeolive> not sure if jody is around today... he may be stuck moving
22 <aaime> aah right
22 --> simboss has joined this channel (n=chatzill@host201-203-dynamic.27-79-r.retail.telecomitalia.it).
23 --> wohnout has joined this channel (n=wohnout@kolej-mk-60.zcu.cz).
24 <dwins> aaime: can you add 'say hello to the SoC student working on H2' as a topic as well?
25 *** aaime sets the channel topic to "Weekly IRC: 0) what is up 1) paging 2) Hello to SoC H2 student".
26 <aaime> who's running the meeting?
26 <groldan> I thought you
26 <aaime> eh, I knew I would end up doing it
26 <aaime> Ok
26 <aaime> 0) What's up
27 * aaime flying low by at a high speed with DuckHawk project
27 * groldan hacking on paging
28 <simboss> doing nothing
28 <acuster> acuster — writing a file to clean the SVN dump
28 * jdeolive is working on rewriting component wms spec
29 * groldan does not understand what jdeolive is doing
29 <jdeolive> the spec as written is confusing and ambiguous... so ogc is getting topp to make it so that it is not
30 <groldan> cool, thanks for the clarification
30 <jdeolive> np
30 <groldan> next topic?
30 <aaime> 1) paging
31 <groldan> okay, I was hoping to ask for a vote
31 <groldan> and was going to test postgis but the meeting time catched me up
31 *** aaime is now known as gt-meetbot.
31 <jdeolive> HAHA
31 <groldan> yet, I'm leaving QueryCapabilities.getFilterCapabilities() out of the picture by now, since its being an unexpected scope increase
32 <gt-meetbot> groldan, it was something requested by Jody as a condition to let the proposal go afaik
32 <groldan> do we have enough PMC around for a vote? guess not, but does anybody read the proposal?
32 <gt-meetbot> I wrote most of it... does it count as reading?
32 <groldan> thanks gt-meetbot, yeah, it was about having a default implementation with everything set to false
33 <simboss> groldan can I ask a quick question?
33 <groldan> sure
33 <simboss> do you mention
33 *** gt-meetbot is now known as gt-clown.
33 <groldan> gt-meetbot: that certainly counts, I wish I had more bots that write them
33 <simboss> I have playing a little bit lately
33 <simboss> with ebXML registries
33 <simboss> where paging is part of the spec
33 <groldan> having a lot of xml fun so
34 <simboss> (using JAXR)
34 <jdeolive> i am wary of introducing new api...
34 <jdeolive> it is indeed scope increase
34 <jdeolive> not saying its a bad idea
34 <groldan> so you have some recommendation about paging?
34 <jdeolive> but it might be good to seperate out
34 <simboss> paging there is implemented using offset and maxelement
34 <groldan> how that relates to CSW 2.0 paging? the same?
34 <simboss> I quickly scanned through the proposal
34 <simboss> of feature paging
35 <simboss> and there was no mention of maxelements/maxresult/numofelementshit or whatever you want to call it
35 <gt-clown> zzzzz
35 <groldan> maxFeatures?
35 <gt-clown> that was the idea
36 <gt-clown> keep on using maxFeatures, it's already there
36 <simboss> k then I scanned it too quickly
36 <simboss> thx
36 <groldan> we're reusing Query.maxFeatures for that concept
36 <simboss> question answered
36 <gt-clown> it may well be non mentioned at all (it's already in the query api)
36 <groldan> and I'm more inclined to call offset startIndex
36 <groldan> and leave offset/limit as jdbc jargon
36 <simboss> startIndex is better IMHO
37 <gt-clown> maybe startFeature
37 <gt-clown> startFeatureIndex
37 <gt-clown> (to better match maxFeatures?)
37 <gt-clown> (offset/limit --> sql jargon)
37 <jdeolive> gt-clown: did we figure out what wfs 2.0 calls it?
37 <groldan> may be, though everybody seemed to worry about having a name reflecting some existing spec
37 *** gt-clown is now known as aaime.
38 <aaime> They don't call it at all
38 <aaime> results are treated like a linked list of pages
38 * jdeolive is going to start calling andrea "changling"
38 <aaime> Odo please
38 <jdeolive> haha
38 <aaime> you specify maxFeatures, the feature collection returned has a link to the next page
38 <jdeolive> i see
38 <jdeolive> alrighty
38 <groldan> yeah, they don't use it, just assume you start at the first "page" and then follow server generated "links"
38 <groldan> no random access to pages
39 <jdeolive> i see
39 <aaime> (we can always add that as our custom vendor option for GET requests, people would love us if we do)
40 <groldan> and wfs 5.0 will include it
40 <jdeolive> anyways, to sum up, i am +1, no real preference on startIndex vs startFeatureIndex, but -1 on QueryCapabilities for now
40 <jdeolive> I think it should have its own proposal
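A minimal, self-contained sketch of the paging parameters being discussed: maxFeatures as the existing page-size concept and the proposed startIndex as the offset. Class and method names here are hypothetical illustrations, not the actual GeoTools Query API.

    // Hypothetical, self-contained sketch -- not the actual GeoTools Query API.
    public class PagedQuery {
        private final String typeName;
        private int startIndex = 0;                    // proposed name ("offset" in SQL jargon)
        private int maxFeatures = Integer.MAX_VALUE;   // existing concept: the page size

        public PagedQuery(String typeName) {
            this.typeName = typeName;
        }

        public void setStartIndex(int startIndex) { this.startIndex = startIndex; }
        public void setMaxFeatures(int maxFeatures) { this.maxFeatures = maxFeatures; }

        // Requesting the third page of 100 features would then look like:
        public static PagedQuery thirdPage() {
            PagedQuery query = new PagedQuery("roads");
            query.setMaxFeatures(100);   // page size
            query.setStartIndex(200);    // skip the first two pages
            return query;
        }
    }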
40 <groldan> what do you do to know if the sorting will work?
41 <groldan> just catch up an exception?
41 <jdeolive> hope and pray
41 <jdeolive> the same thing we do with everything else in the datastore api
41 <jdeolive> dont get me wrong
41 <jdeolive> i like the idea of QueryCaps... just think its a larger issue
41 <jdeolive> and dont want it to hold up paging
41 <groldan> it is if we include FilterCaps, which I'm leaving out so far
41 <aaime> remember it was not there to start with
41 <aaime> Jody asked for it
41 <aaime> in order to let the proposal get a +1
43 <jdeolive> right, and what i am saying is we should tell Jody to start a seperate proposal
43 <aaime> (in my first version of the proposal caps were not there)
43 <groldan> okay, in the mean time I'll make sure I get an end to end test ready
43 <aaime> sure, asking does not cost much
43 <groldan> and since we're in trunk, we can go with a minimal caps (ie, without filter)
44 <aaime> but given he warned us it's probably better to ask before starting to commit
44 <groldan> and let refining QueryCaps be another proposal
44 --> gdavis_ has joined this channel (n=gdavis@mail.refractions.net).
44 <groldan> jdeolive: would that work for you?
44 <jgarnett> hello - I am with a customer today (so am missing most of this meeting)
44 <jdeolive> groldan: yup, works for me
44 <groldan> hi Jody
44 <groldan> quick recap for you
45 <groldan> talking about paging
45 <groldan> we want to get minimal with QueryCapabilities
45 <groldan> ie, FilterCapabilities is too much of a scope increase
45 <groldan> but as we're on trunk we could start minimal
45 <groldan> and let QueryCaps mature as another propoasl
45 <groldan> sal
46 <jgarnett> bleck
46 * jdeolive misunderstood, his suggestion was to forgot QueryCaps all together for now
46 <groldan> that gives jdeolive's +1, otherwise he's -1
46 <jgarnett> I understand that QueryCaps is a larger scope
46 <jgarnett> however I would like to see either that; or someone taking the time to fix the existing datastores.
46 <jdeolive> if people really want QueryCaps right now i could be persuaded... but it seems unnecessary to me at the moment
46 <jgarnett> (ie I don't want to see datastores continue to suck; I was offering QueryCaps as a compromise)
47 <groldan> jdeolive: the gain I see is we won't need to change code later even if QueryCaps gets more methods
47 <jgarnett> I would be happier with doing the work to all the datastores; ...
47 <jdeolive> jgarnett: it's a big change... involves changing datastore implementations and client code
47 <jdeolive> it should not be taken lightly
48 <jgarnett> understood; so what change costs less?
48 <jdeolive> groldan: fair enough... so what does a minimal QueryCaps cover?
48 <jdeolive> just paging?
48 <jgarnett> FilterCaps? or fixing the DataStores?
48 <jgarnett> what I don't want is "half of paging"
49 <jgarnett> (ie the alternative presented earlier, Query2 with a FeatureSource2 is the least impact way of supporting Paging, ie new functionality = new method, new interface just like with GetObjectById)
49 <groldan> jdeolive: a minimal QueryCaps to cover isOffsetSupported() and supportsSorting(SortBy[])
49 <jgarnett> jdeolive; how would you do "just paging"?
49 <jdeolive> jgarnett: sorry, but its a reality of our datastore api at the moment, and its been worked around, you start changing implementations, work arounds stop working
50 <jdeolive> jgarnett: ok, i could go for that
50 <jdeolive> sorry, i mean groldan: i can go for that
51 <jgarnett> jdeolive; I got a few minuets to talk - we went over this on a couple of geoserver meetings. I don't mind what is done so long as it is complete - I am sick of half done work (like this StyleVisitor changes).
51 <jgarnett> groldan++ I can go for that too.
51 <groldan> cool, that's what I can do too with the given timeframe
51 <jdeolive> jgarnett: i know, and i agree
51 <jdeolive> but drastic changes done work
51 <jdeolive> dont work
51 <jgarnett> yep.
51 <jdeolive> they lead to headaches
51 <jdeolive> its a big job... that is all i am saying
52 <jdeolive> putting it as a blocker to doing paging is a bad idea imho
52 <jdeolive> i am bnot saying it should not be done
52 <jgarnett> so what did we wend up with here; GetCaps with a single method isOffsetSupported() ?
52 <jgarnett> (looking for the proposal link...)
52 <groldan> paging implies sorting
53 <groldan>
53 <groldan> so I mean isOffsetSupported(); and boolean supportsSorting(SortBy[]);
53 <jgarnett> makes sense.
53 <jgarnett> groldan are you going to update this page now? and call a vote during the meeting .... or are we expecting an email.
54 <groldan> nope, rigth away
54 <aaime> +1 for paging + limited caps
55 <jdeolive> +1
55 <jgarnett> +1 (you are updating the page now then?)
57 <groldan> done
57 <groldan> gonna add the votes now, thanks all
57 <jgarnett> QueryCapabilities as an abstract class please?
57 <simboss> +0
58 <jgarnett> or even as a final class; it is only a datastructure with fixed intent...
58 <groldan> not an interface
59 <jgarnett> sorry groldan; I am not sure when FilterCaps turned into an interface (it is in the proposal page)
59 <groldan> hmmm filtercaps is an interface afaik
59 <groldan> and I understood you wanted the same trick than for Query, ResourceInfo, etc
00 <jgarnett> I thought it was going to be a final class, or at least an abstract class - so that we can add more methods to it over time.
00 <jgarnett> (and not break everyone)
00 <aaime> he did want to avoid the same mistake as Query afaik
00 <groldan> yet, I was wondering if there's overlap in intent between ResourceInfo and QueryCaps
00 <jgarnett> not so much
00 <jgarnett> resourceInfo is about the data
01 <groldan> I see
01 <jgarnett> QueryCaps is about the FeatureSource API and how much you can trust it (ie interacting with the data)
01 <jgarnett> we got only a few more minuets; can we move on to the next agenda topic...
01 <jgarnett> or do you need more groldan?
02 <groldan> I'm gonna change to QueryCaps as an abstract class with no setters
02 <groldan> if that's what you meant
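A minimal sketch of the trimmed-down QueryCapabilities agreed above: an abstract class rather than an interface (so methods can be added later without breaking implementors), no setters, and conservative defaults. The names follow the meeting discussion and are not necessarily what was eventually committed; SortBy is the GeoAPI sort interface.

    import org.opengis.filter.sort.SortBy;

    public abstract class QueryCapabilitiesSketch {

        /** True if the datastore honours the query startIndex (paging offset). */
        public boolean isOffsetSupported() {
            return false; // conservative default: assume paging is not supported
        }

        /** True if the datastore can natively sort on all of the given attributes. */
        public boolean supportsSorting(SortBy[] sortAttributes) {
            return false; // conservative default: assume sorting is not supported
        }
    }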
02 <jgarnett> 2) Hello to SoC H2 student
03 <wohnout> Hello that's me
03 <jgarnett> welcome!
03 * dwins lets wohnout do the talking
03 <wohnout> I have project to make spatial index to H2 database
04 <groldan> cool, welcome, btw
04 <wohnout> It should be great if it will be done
04 <wohnout> and I hope that I will make it
04 <groldan> so do you have a plan?
05 <aaime> wohnout, did you notice people were talking about your project in the H2 mailing list?
06 <wohnout> aaime: yes. I write there some info about me
06 * dwins was conveniently cc'ed on that thread
06 <wohnout> groldan: not so detailed but I have some ideas. I was talking to dwins and jdeolive today.
06 <jgarnett> I have a silly question; where is this work going to take place? As a geotools unsupported module? (and if so can i ask that we bring this one up to supported status before the end of the summer)
07 <aaime> jgarnett, I think we should've stated that as a condition before the student accepted
07 <aaime> raising the bar after the fact does not seem fair
09 <groldan> but what's the intent, being in geotools code base or another place?
09 <jgarnett> that was more my question, where is the work going to live.
09 <groldan> and if the former, we should help him as much as possible to get, even if not to supported status, as close as possible
10 <groldan> (ie, by providing code reviews etc)
10 <-- DruidSmith has left this server (Read error: 113 (No route to host)).
11 <aaime> wohnout, do you understand what we're talking about?
11 <wohnout> I'm new to geotools ocde so I don't know how to answer
11 * dwins gets the impression that summer of code projects often have their own project on code.google.com
11 <wohnout> *code
11 <aaime> dwins, if they are stand alone, yes
12 <aaime> if this one heavily depends on gt2 api it's going to be easier to have it as an unsupported module in gt2
12 <-- gdavis_ has left this server (Remote closed the connection).
13 <dwins> sure
13 <dwins> but shouldn't gt2 use the database rather than the other way around?
13 <wohnout> I was looking on H2 module in gt2 and I have feeling that is not maintained by anybody
13 <jgarnett> well met wohnout; I gota run but it has been great to "meet" you online.
14 <jgarnett> jdeolive maintains the H2 module; but has not brought it over to be part of the standard "plugins" for geotools.
14 <wohnout> same to you
14 <aaime> wohnout, jdeolive here is the maintainer of that module
14 <aaime> wohnout, so the plan is to work on h2 codebase to provide a better spatial index
14 <aaime> and not to try and leverage the one already there?
15 <dwins> we have been discussing how the spatial index in h2 doesn't work well for non-point data
15 <aaime> if so, can I suggest a good book?
15 <aaime> "spatial databases" by Rigaux, Scholl, Voisard
15 <groldan> indeed
15 <aaime> has a few chapters on spatial indexes
16 <groldan> (though I may have read just 1/6 of it)
16 <wohnout> I think that this should be something above h2...
16 <aaime> groldan, me too
16 <groldan> but you had too much to do to finish it, I was not understanding
17 <aaime> well, indexes usually are integrated with the db for good reasons
17 <aaime> though of course integrating requires more work
17 <aaime> ah, sqlite recently got a spatial extension
17 <aaime> you may want to have a look at it
18 <aaime>
18 <wohnout> ok will take a look on it
19 <aaime> (not sure it has a spatial index, but people were speaking highly of it at a recent conference)
20 <groldan> so that's it? go with our best wishes wohnout.
20 <groldan> should we call for a wrap, or do you want to say anything else?
21 <wohnout> thanks
21 <wohnout> no that's all
21 <groldan> okay, welcome aboard again
21 <groldan> bye all, bed time here
22 <aaime> Who's posting the logs?
23 <groldan> okay, that can be my good action of the day, I'll post them
23 * aaime loves groldan
Summary:
0) what is up
1) SoC
2) Process
3) Maven 2.0.9
4) proposals
jgarnett: meeting time?
gdavis: yes
jgarnett: (if I can ask someone else to run the meeting; I am too stupid / slow today)
jgarnett: ...
jgarnett: agenda items
Eclesia: process proposal
jgarnett has changed the topic to: 0) what is up 1) SoC 2) Process
jgarnett: please change the topic to add an agenda item...
desruisseaux has changed the topic to: 0) what is up 1) SoC 2) Process 3) Maven 2.0.9
jgarnett: Martin are you content to talk about Range via email? Or do we need to say anything .. I would like to write up a page for the user guide when we have it sorted out.
desruisseaux: Jody, sire
desruisseaux: (sure)
jgarnett: okay, I was expecting andrea to talk about his many proposals last week. At the very least they have all been voted on by now ...
jgarnett: lets go
jgarnett: 0) what is up
desruisseaux: I'm still working on NumberRange, trying to get it working with generics. But it would be more "weak" generics than what we previously had (a little bit like java.util.Collection which has a contains(Object) method, not contains(E))
jgarnett: jgarnett - installing arcsde and oracle and battling version hell between them
gdavis: gdavis: finishing up the process module and starting the gt-wps module, which is much like the wms module but for wps requests
jgarnett: (aside: martin I know how you can do that one better; but you may need to sacrifice some of your constructors to get there...) Subject for after meeting ..
desruisseaux: (Jody: no need to sacrifice constructors - the problem is not there).
desruisseaux: (I means, I deprecate constructors, but they can be removed in next releases rather than now)
jgarnett: aaime did you have anything for the agenda? we are already moving ....
desruisseaux: No, new NumberRange<Integer>( 1, 1 ) can work.
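To illustrate the "weak generics" idea mentioned above (mirroring java.util.Collection, whose contains(Object) is deliberately not contains(E)), a simplified, self-contained stand-in; this is not the actual NumberRange code.

    public class WeakRange<T extends Number & Comparable<T>> {
        private final T minimum;
        private final T maximum;

        public WeakRange(T minimum, T maximum) {
            this.minimum = minimum;
            this.maximum = maximum;
        }

        /** Accepts any Number, like Collection.contains(Object) accepts any Object. */
        public boolean contains(Number value) {
            double v = value.doubleValue();
            return v >= minimum.doubleValue() && v <= maximum.doubleValue();
        }

        public static void main(String[] args) {
            WeakRange<Integer> range = new WeakRange<Integer>(1, 10);
            System.out.println(range.contains(5.5));  // true, even though 5.5 is a Double
        }
    }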
jgarnett: 1) SoC
ggesquiere [n=gilles@ALyon-152-1-142-149.w86-209.abo.wanadoo.fr] entered the room.
jgarnett: Deadline is passed; we will get students some time this week (I think...)?
jgarnett: So we will need all hands on deck answering the user list etc....
jgarnett: that is about all - thanks to Mentor's for volunteering; and to the many applicants - a lot of interesting ideas all around.
jgarnett: anyone else ...
jgarnett: 2) Process
jgarnett: gdavis ...
gdavis: ok regarding process module, we need to sort out the following concerns: process as beans, runnable, using ISO 19111 parameter as a base for ours, removing the hint attributes from the parameter class.
jgarnett: I think I can do this quickly:
jgarnett: - process parameters as beans; good idea - we tried it and it does not work a) internationalization support is terrible b) metadata such as FeatureType or CRS cannot be communicated
aaime: hum....
jgarnett: - Runnable can be considered; but it goes against the idea of a ProcessListener
jgarnett: - using ISO 19111 parameter was terrible the one time we tried it for a dynamic service; I never want to see that again.
aaime: can you elaborate at bit on each one?
Eclesia: I dont understand your argument against runnable
jgarnett: I think a lot of this discussion has been going on via IRC; we need to send emails to the list now and again...
desruisseaux: Jody, I never understood what didn't work for you regarding ISO 19111 parameters. Could you post on email (or on a wiki page) an example please?
jgarnett: -
desruisseaux: I don't understand neither the argument against Runnable.
jgarnett: shows how to use a progress listener
sfarber: simboss, you want to prod more PMC into voting on the RS prop?
jgarnett: the trick here is that the listener is provided by the code that actually runs the thing
jgarnett: if we seperate that ( with a setProgressListener) we needless open up the door to mistakes and deadlock.
sfarber: (oops, thought we were in agenda-gathering phase. I'll shut up now)
jgarnett: Eclesia; it is a coding style choice - not a technical limitation.
desruisseaux: Jody, why? It is common to have "addFooListener" methods in worker classes (e.g. ImageReader.addImageProgressListener(...))
desruisseaux: After all, more than one listener may be interested to know about the progress...
jgarnett: martin; about ISO19111 parameters; I spent two months with you sorting it out; and we made a WMS GridCoverageExchange that used it. But the result was so crazy that we could not build a user interface based on the provided ParameterGroupDescriptor and ParameterGroup objects.
aaime: desriusseaux, yes, it's the swing style, and it needs special handling to avoid memory leaks
aaime: (example:)
jgarnett: martin you are correct; that is common; the style of programming I showed for ProgressListener is as much about "flow control" and the ability to interrupt as it is about reporting progress.
desruisseaux: So why not a Process.addProgressListener(...) method? (using WeakReference or not at Process implementor choice)?
jgarnett: Basically I want the code that "starts" the Thread to also provide the ProgressListener. Giving the responsibility to anyone else is a mistake ...
jgarnett: the easiest API way to enforce this is by using a process( ProgressListener control); method
jgarnett: so yes martin; the approach you describe works; using ProgressListener as a parameter works as well - and forces the programmer starting the Thread to take responsibility (ie the goal here is to assign responsibility to the correct programmer)
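A self-contained sketch of the "listener as a parameter" style argued for here: the code that starts the work must hand in the monitor, and the process checks it for cancellation. The names are illustrative only, not the actual GeoTools process API.

    import java.util.Collections;
    import java.util.Map;

    public class ProcessSketch {

        /** Minimal monitor: report progress, allow cancellation. */
        interface Monitor {
            void progress(float percent);
            boolean isCanceled();
        }

        /** The process receives its monitor from the caller; it never picks one itself. */
        interface Process {
            Map<String, Object> execute(Map<String, Object> input, Monitor monitor);
        }

        static final Process COUNTER = new Process() {
            public Map<String, Object> execute(Map<String, Object> input, Monitor monitor) {
                int steps = (Integer) input.get("steps");
                for (int i = 0; i < steps; i++) {
                    if (monitor.isCanceled()) {
                        return null;                    // the caller asked us to stop
                    }
                    monitor.progress(100f * i / steps); // report how far along we are
                }
                return Collections.<String, Object>singletonMap("count", steps);
            }
        };
    }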
desruisseaux: Jody, I disagree. A Swing widget may be interested to know about the progress but may not want to start it itself. The progress would be rendered in some widget in Swing threads. The process could be started by any other threads.
desruisseaux: Jody, there is my use case:
jgarnett: I understand; however I don't agree - it is the interupt case that gets me.
acuster [n=acuster@rab34-2-82-230-29-3.fbx.proxad.net] entered the room.
jgarnett: please continue martin ...
desruisseaux: 1) A process is to be run in some background thread.
desruisseaux: 2) We want to monitor progress in a Swing widget.
pramsey left the room (quit: ).
desruisseaux: 3) We want to give the process to a java.util.concurrent.Executor, which will start the process in some threads that it control itself
desruisseaux: (i.e. using a ThreadPool)
desruisseaux: So we create the Process
desruisseaux: Swing register itself as a listener
jgarnett: okay so I understand your use case; and being a control freak of an API designer I want to ensure that the code that is handling all the threads is forced to know what is going on with respect to monitoring and canceling the thread.
desruisseaux: We submit the Process to the Executor, which will start it later as it see fit.
jgarnett: So if you had a "ProcessManager" model; the ProcessManagerView may very well use a swing based progress listener to do the work; but I want to force the programmer to keep their head up and make that judgement call.
aaime: desruisseax, you could get what you want by creating a wrapper around the process that turns it into a Runnable
aaime: jgarnett, having a single listener seems limiting thought
jgarnett: I had not thought explicitly about java.util.concurrent.Executor; I would be surprised if they did not have some of these facilities already.
jgarnett: aaime you are correct; the limitation is on purpose.
aaime: why?
desruisseaux: My goal is to get a Process framework fitting nicely in java.util.concurrency
jgarnett: (note all of this is a set of tradeoffs; I don't mind negotiating on this ... )
jgarnett: I am drawing a hard line here to make sure I am understood.
jgarnett: martin that was not listed as one of the goals for this work; it may be a useful goal - do you have some time to devote towards it?
aaime: I do understand the memory leak issue, I've been caught more than once in it with non trivial listener setups
desruisseaux: I can work with Eclesia
jgarnett: one of my goals is to integrate with workflow engines and eclipse jobs for example ...
aaime: but I don't understand the single listener limit
desruisseaux: Memory leak seems a separate issue to me.
aaime: desriusseaux, given how common they are with swing, I'd say it's a design mistake
aaime: (see also)
***Eclesia agree with aaime
Eclesia: a normal listener system would be better than a monitor
jgarnett: aaime; the single listener limit is just a choice; between limiting code to a single listener (you can always write a ProgressDispatchListener) and avoiding setProgressListener ....
jgarnett: so let me ask Eclesia; you understand the ProgressListener approach now? And you would simply like to use more than one ProgressListener?
aaime: jgarnett, that limitation could of course be removed using a dispatcher but it's annoying... why can't you accept a "Listener..." param
jgarnett: I also should ask if what I am trying to do here is too subtle? is my point not coming across.
desruisseaux: A paranoiac code would need to invoke getProgressListener before setProgressListener, and create some CompoundProgressListener if the former returned a non-null value. An addProgressListener(...) method is simpler.
Eclesia: yes, i would like more, I was kind of "foreced" to used monitor
Eclesia: forced*
jgarnett: yes; that was my goal
jgarnett: to force you to use monitor; and thus respect the workflow engine or user.
jgarnett: the other aspect to this is that what we have here is not a normal listener; the idea is to decompose the listener with sublisteners as code delegates to other code.
desruisseaux: Jody what you call Monitor looks like this interface, or am I wrong?
desruisseaux:
Eclesia: handling a listener list is not a hard thing...
jgarnett: yes
desruisseaux: Basically I strongly feel that one of the top priority of a process framework would be to fit nicely in the java.util.concurrent package.
jgarnett: and it was not the point.
jgarnett: new Runnable()
jgarnett: moving on one of my priorities is to make sure that I can fit into a workflow engine with some form of job control.
jgarnett: as such I need some kind of callback object.
jgarnett: Is the fact that this is called "Listener" the source of confusion?
desruisseaux: Jody, a wrapper adds conceptual weight... More concepts to handle for the developer rather than the ones he is used to == more complexity...
jgarnett: you are correct
jgarnett: however a wrapper is almost always required
desruisseaux: Why?
jgarnett: see SwingWorker for just such a wrapper
jgarnett: Runnable is also a wrapper; you are not often making those directly ...
jgarnett: because code requires parameters; flow control; and output
desruisseaux: If it is in order to get the result of some computation, java.util.concurrent.Callable do just that.
jgarnett: Runnable usually depends on object fields; or final parameters for the input; and sideeffects for the output.
desruisseaux: Java.util.concurrent gives you everything needed for getting an output object.
desruisseaux: Callable is like Runnable but returns a value.
jgarnett: martin we are missing something
jgarnett: how do I stop a runnable
desruisseaux: Future.get() can be invoked in any thread, wait for Callable to finish his works and returns that value.
jgarnett: that is all I want to know.
desruisseaux: Jody, you stop a runnable with Future.cancel() !!
desruisseaux:
jgarnett: Note Callable.call() is another example of a wrapper that is similar to Runnable; ie similar problem.
jgarnett: good now we are making progress
desruisseaux: Callable doesn't need to be a wrapper. Any developer can implement it directly if he wishes
gdavis: if it has a cancel feature, im fine with using runnable
jgarnett: it does
jgarnett: martin can we make something that reports Progress using an extension of Future? or is that already done somewhere ...
desruisseaux: We could extends Future
desruisseaux: I have no objection to that.
jgarnett: Approach of Future is exactly the same approach as that used by ProgressListener; code that sets up the work is responsible for taking responsibility about its flow control.
jgarnett: I notice that SwingWorker now implements Future
jgarnett: so that may infact show a good approach.
aaime: but it keeps separate control and notification
aaime: you need to listeners for progress
aaime: (progress tracking)
jgarnett: shows how to do Future and a Swing Progress bar together.
desruisseaux: We submit a Callable (or Runnable) object to an Executor, which will start the process in whatever thread it wish. Executor gives us in return a Future object that we can use for cancelling or getting the result.
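A sketch of the java.util.concurrent wiring described here, using only standard JDK classes; the body of the Callable is a placeholder for the real processing work.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FutureSketch {
        public static void main(String[] args) throws Exception {
            ExecutorService executor = Executors.newSingleThreadExecutor();

            // Submit the work; the executor decides which thread runs it.
            Future<Integer> future = executor.submit(new Callable<Integer>() {
                public Integer call() {
                    int sum = 0;
                    for (int i = 1; i <= 1000; i++) {
                        sum += i;               // stand-in for the real processing
                    }
                    return sum;
                }
            });

            // Any code holding the Future can cancel it:  future.cancel(true);
            // ...or block until the result is ready:
            System.out.println("result = " + future.get());

            executor.shutdown();
        }
    }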
aaime: would that fit in the Eclipse process control mechanism?
aaime: (with GeoServer I'm hands free, I can use whatever java API)
jgarnett: we still need a bit of glue code; for when the progress & future is delegated to other code (does anyone know what I am talking about here?)
jgarnett: aaime eclipse Job is very simple (on the level of Runnable + ProgressMonitor)
jgarnett: so we can make it work with whatever we end up with
jgarnett: I am more concerned about some of the process engines; I am not aware of anyone that uses Future yet.
desruisseaux: I use it.
desruisseaux: MosaicImageWriter use it in order to write many tiles in parallel.
sfarber: I used it too, to do pipelining so as to avoid the geoserver overload bug. It was pretty easy to get ahold of, and sort-of 'just worked' for the rather complex threading tasks that were put in front of it.
desruisseaux: I'm considering to use it in MosaicImageReader as well (reads many tiles in parallel) but this is more tricky.
jgarnett: still focus is on the Java programmer making processes; I would much rather only the workflow engine writer has to be smart (cause they only need to do it once)
simboss: hi guys, sorry I am late
jgarnett: martin and saul; do you use ProgressListener; or any other glue code in geoapi / geotools ?
aaime: sfarber, what was that? "geoserver oveload bug"? Never heard of it...
sfarber: It's the bug you've just found with DRobinson.
sfarber: (I'm pretty sure)
desruisseaux: In the case of MosaicImageWriter, I use ImageWriteListener (forgot the exact name) since it is the listener I'm supposed to use. But the principle is close to identicaL.
jgarnett: okay
aaime: (sfarber, sorry, where is the patch for that??)
jgarnett: so martin we will look at this; and revise the proposal if we can make it work.
sfarber: jgarnett: no I haven't used the geotools 'progressListener' class before. But I've used other versions of the same concept.
jgarnett: I don't see any downchecks right now.
jgarnett: as for beans vs iso parameters; we are pretty stuck on that stuff - they both don't meet our requirements.
desruisseaux: 10 minutes left to the meeting...
desruisseaux: Well, I suggest to focus on one topic at time.
jgarnett: yeah lets move on; take the rest to email - good discussion however. Should we call a breakout IRC on this? because we gotta get moving...
desruisseaux: Could we start with Future, and revisit parameter later?
gdavis: well i cant move forward without some decisions on all these things...
jgarnett: yes; what we have now works for parameter.
jgarnett: so we can proceed with what we have now.
gdavis: ok
jgarnett: 3) maven 2.0.9
jgarnett: martin over to you?
gdavis: have we pretty much decided to use runnable then?
desruisseaux: I would said that the decision for now is to try to fit into java.util.concurrent for everything except parameters.
desruisseaux: java.util.concurrent said nothing about parameters, so we can revisit this issue later.
desruisseaux: So yes, lets try to use either Runnable or Callable (at your choice)
gdavis: ok
acuster: what are the processes for?
desruisseaux:
acuster: i.e. how complex?
desruisseaux: Maven 2.0.9
Eclesia: acuster:
simboss: guys I would ask some feedback on the RS proposal before we close the meeting since it has been floating around for a while now
desruisseaux: Cédric tried a build and got no problem
jgarnett: grab an agenda item?
Eclesia: there are examples at the beginning of the page
jgarnett has changed the topic to: 0) what is up 1) SoC 2) Process 3) Maven 2.0.9 4) proposals
simboss: thx
desruisseaux: Any objection if we put Maven 2.0.9 as a requirement?
desruisseaux: Doing so (after removing the plugin versions that are now declared by Maven) produces an upgrade of a bunch of plugins.
desruisseaux: We are using some old plugin versions.
desruisseaux: Letting Maven select the plugins versions may help us to get the upgrate them from time to time.
acuster: I don't object but I'd like to stop maven creep
desruisseaux: (typo: to update them)
acuster: i'm tired of everyone being on different mavens and not knowing why stuff is broken
DruidSmith left the room (quit: Read error: 110 (Connection timed out)).
desruisseaux: (for info, Adrian may be referring to aggregated javadoc not working when building with Java 5. It works with Java 6 only and nobody knows why)
aaime: on 2.4.x it seems to works with java5 (that's how I build the aggregated jdocs for 2.4.x releases)
desruisseaux: It was used to work before, but Adrian has spent one days trying to get it on trunk, and the only way that seems to work is with Java 6...
desruisseaux: Nevertheless, my hope is that nailing down the plugins to the versions "recommended" by Maven releases starting at Maven 2.0.9 may help to make the build a little bit less chaotic.
desruisseaux: (this is just a hope though)
desruisseaux: By chaotic I means things that was used to work and doesn't work anymore like javadoc...
desruisseaux: Any opinions?
sfarber: desruisseaux and acuster: can you try your changes before requiring this?
desruisseaux: Yes we tried.
simboss: btw, as a side note maven >= 2.0.8 has a few interesting bug fixes for doing offline builds
sfarber: I mean, just make the local mods to the pom.xml files, run it with maven 2.0.9 and see if it works!
desruisseaux: Yes we tries and it worked.
simboss: when snapshots are involved
desruisseaux: (typo we tried).
sfarber: ok, so the data point is:
jpfiset left the room.
sfarber: * maven mvn >=2.0.9 + martin and acuster's changes = working javadocs and building
acuster: nope, not so lucky
sfarber: I think that's clearer than "my hope is that nailing down the plugins to the versions "recommended" by Maven releases starting at Maven 2.0.9 may help to make the build a little bit less chaotic."
desruisseaux: by "it worked" I means the build is successful, but aggregated javadoc is still working only with java 6.
desruisseaux: However I feel it as a step in the good direction.
aaime: hmmm... what improvements do we get (solid ones, things that did not work before and work now, or things that work faster)?
desruisseaux: None I can see right now.
aaime: (since experience is showing that we find out issues only when everybody switches)
desruisseaux: Simplier pom.xml.
acuster: what version of maven is cannonical today?
***acuster bets there are many
acuster: I was on 2.0.8
aaime: me too
simboss: same here
aaime: what worries me is not really switching to 2.0.9, it's changing the pom
desruisseaux: Oh I forgot! One possible benefit.
desruisseaux: After the Maven plugin upgrades, we got warnings telling us that some of our modules will not build anymore in a future Maven version.
aaime: erk
desruisseaux: We got the warning after the plugin upgrade, not the maven upgrade itself.
jgarnett: back
desruisseaux: The warnings tell us what need to be reviewed and maybe modified in our pom.xml.
jgarnett: I just updated from maven 2.0.5 to 2.0.8 in the last month.
desruisseaux: So having too old plugins is maybe not a good idea.
acuster: martin can we split this out?
desruisseaux: It may be easier to upgrade the plugins a little bit more frequently, and relying on Maven 2.0.9 and future versions for that may make it easier.
acuster: ask for concesus on the move to 2.0.9
acuster: then ask for what the feeling is for changing the pom.xml?
acuster: and when downstream users have time to test builds
desruisseaux: Fine for me.
desruisseaux: So the proposal is: ask people to move to maven 2.0.9 now but do not update the pom.xml yet
aaime: Martin, maybe create a patch, a jira issue, and then ask people to try it out
desruisseaux: And update the pom.xml a little bit later?
desruisseaux: Okay.
aaime: works for me
acuster: anyone object to moving to 2.0.9?
aaime: (though without a changed pom no-one will really be forced to move to 2.0.9 I guess)
acuster: going, going ...
jgarnett: thinking
jgarnett: I am willing to try it
The [i=3f5eed82@gateway/web/ajax/mibbit.com/x-7300a1f6eebe1f03] entered the room.
jgarnett: can we do another thing when we try on the weekend; it to avoid downtime?
jgarnett: (or is everyone feeling lucky)
The left the room (quit: Client Quit).
aaime: Do we really have a maven version that people have to use? Afaik we just have a recommendation in the release guide?
DruidSmith [n=DruidSmi@pool-71-181-172-190.sctnpa.east.verizon.net] entered the room.
jgarnett: occasionally our build just does not work; if you do not go with the maven version in the developers guide
jgarnett: (ie I just want to list what is tested and working)
desruisseaux: We will create a JIRA task tomorrow, ask volunteers to try, and maybe edit the pom.xml on a weekend if we get permission by then.
desruisseaux: I'm done.
aaime: cool
acuster: good
acuster: btw,
acuster: I changed some javadoc stuff today
jgarnett: okay
jgarnett: meeting time is up
jgarnett: but I really wanted to hear from simboss and aaime about the proposal storm last week
acuster: as in using references to javadoc 5 not 1.4
jgarnett: do either of you want to take the floor?
acuster: just a heads up for anyone that notices things changed
aaime: jgarnett, sorry, my sql proposal is dead in the water implementation wise
simboss: jgarnett: just a reminder about the RS proposal
jgarnett: your sql proposal?
aaime: I'm on a paid project for the next 2 weeks
aaime: sld proposal, sorry
jgarnett: okay
Eclesia: simboss : do you follow SE1.1 or a mix between SLD1.0 and SE ?
simboss: SE 1.1 and SLD 1.0
simboss: are basically the same thing
jgarnett: We checked SE1.1 as part of the review of his proposal.
simboss: as far as rasterSymbolizer is involved
jgarnett: simboss do you have everything you need? ie enough votes ... enough discussion etc...
simboss: actually
jgarnett: ()
simboss: I wanted to ask martin if he had some time to review a bit the the proposal
jgarnett: I only see two votes ...
simboss: since last time he expressed some concerns
simboss: I think that we are at a stage now
simboss: that we can port to trunk
simboss: and narrow down remaining concerns while there
simboss: this thing has been sitting on a branch for enough time
simboss: risk is to lose it
sfarber: I really agree with simboss here.
sfarber: And I really need the RS work for better SDE raster support.
jgarnett: so what do we need votes from aaime, jdeolive, martin and yourself?
jdeolive: i know little of it... but i looked it over and liked it
jdeolive: +1 from me
aaime: sorry, I did not manage to review the classes that's why I did not vote
jgarnett: sweet
jgarnett: +0 then aaime?
simboss: as I say in the proposal I really need people to test this work
simboss: the combinations for RasterSymbolizer are a lot
simboss: even though the code coverage is >= 70%
jgarnett: simboss I have paid work to make a bit of a user interface for this stuff
simboss: I would like to have someone like sfarber hitting on it
jgarnett: the moment I have that I will get more testing than I know what to do with.
desruisseaux: Just a quick note: Eclesia is working on expressing Symbology Encoding 1.1 as GeoAPI interfaces. He feels that such SE interfaces would naturally fit in a raster symbolizer. He asks if there is any chance to wait for one week until the SE 1.1 specifications is expressed as GeoAPI interfaces?
simboss: cool
jgarnett: martin I checked against the SE 1.1 proposal; there are no new ideas here for RS work.
simboss: the interface for RasterSymbolizer are the same
jgarnett: So Eclesia's work should not effect this.
Eclesia: you forget the function problem
aaime: (or, if it affects it, chances are that it will break gt2 api?)
simboss: but I am keen to check eclesia work (especially the widgets)
desruisseaux: It would affect it if we wish to use a common set of interfaces
jgarnett: Eclesia; I don't think there is a function problem; but we need to have this conversation on the geoapi list ...
simboss: is there a proposal for this?
Eclesia: SE clearly separate FeatureTypeStyle from coverageStyle at the beginning, and later in the specification a concept of Function is used
simboss: to have an idea about the impact?
Eclesia: and i haven't clearly defined this "function"
desruisseaux: Is there any chance that Simone could look at the proposed interfaces, comment, etc. and see if it would be doable to implement some of them?
jgarnett: aaime I hope to make geotools interfaces extend the geoapi ones; and unlike with filter not switch over.
Eclesia: I could comit what i've done so far on geoapi, simboss could have a look
desruisseaux: Note that it doesn't prevent Simone to move the work on trunk if he wish
jgarnett: Function is the same as Filter 1.1 function; but has the optional concept of a "default" value to use if the function implementation is not found (a very smart move). There is also a list of optional functions they recommend implementing.
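A self-contained sketch of the SE 1.1 "fallback value" idea described here: when a named function implementation cannot be found, the encoded default is used instead of failing. Names are hypothetical, not GeoTools API.

    public class FallbackFunctionSketch {

        interface Function {
            Object evaluate(Object feature);
        }

        /** Look up a function implementation by name; null when none is available. */
        static Function lookup(String name) {
            return null; // pretend this renderer does not know the function
        }

        static Object evaluate(String functionName, Object feature, Object fallbackValue) {
            Function function = lookup(functionName);
            if (function == null) {
                return fallbackValue;          // the SE 1.1 "default" behaviour
            }
            return function.evaluate(feature);
        }

        public static void main(String[] args) {
            // Unknown function: the style still renders, using the fallback colour.
            System.out.println(evaluate("Recode", null, "#808080"));
        }
    }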
simboss: what if we do this
simboss: I port to trunk
simboss: so that sfarber and jgarnett can start testing
simboss: and when eclesia is ready
aaime: I don't see why a proposal that has been in the works for month should be stopped by one that does not even have a wiki page (just my 2 cents)
simboss: I help with adapting
simboss: aaime +1
jgarnett: aaime you are correct; that is why I checked the SE 1.1 just to see if there was anything to learn.
jgarnett: (it not related to Eclesia's work; as part of reviewing Simone's work)
aaime: if the Eclesia work gets a successful proposal we can adapt the raster symbolizer work
aaime: I think we'll have to adapt various other stuff (or not)?
simboss: aaime: that's why I asked for a proposal page
desruisseaux: Expressing ISO standards as interfaces and proposing a new set of interfaces (from our own mind) is not the same process...
aaime: there are a few things that are new in SE 1.1 so to handle those parsers, style factory, styled shape painter and whatnot will have to be changed
simboss: or does the proposal pain applies only to some people ;-( ?
aaime: ISO interfaces must be voted like anything else
simboss: (note that pain was not a typo)
desruisseaux: Andrea, if so it should be voted by OGC working group, not by us.
aaime: I won't allow the usage of geoapi to circumvent the proposal process
aaime: by us too
jgarnett: understood; traditionally we have adopted geoapi interfaces when they were ready. Just like with the feature proposal. Same deal here.
aaime: since gt2 is not going to implement any random idea that comes up with geoapi
aaime: we're the only implementor
desruisseaux: Not accurate Andrea.
aaime: geoapi is just a way few people use to control gt2 in my vision
desruisseaux: Referencing: JScience
desruisseaux: Geometry: Geoxygen
simboss: geoxygne is quite dead,
aaime: nevertheless, implementing it must be a free choice
simboss: it is not the best example ever
aaime: not an imposition
jgarnett: on the positive side we are trying to have more controls on the geoapi side; I don't want to see the project confused by interfaces without a public implementation again.
simboss: and I agree with aaime here
simboss: changing the interface should be strongly discussed
jgarnett: yep
jgarnett: um; this upgrade to SE has been on the technical debt page for a long time
acuster: is geotools still aiming to implement OGC/ISO ?
desruisseaux: You means changing the interfaces in the RS proposal?
simboss: the most successfull FOSS4g libraries
jgarnett: I went over that plan with Eclesia a few weeks ago.
Eclesia: ok, so i'll do SE in geoapi and if it's not to far from geotools, will make an update? is that so ?
simboss: do not even implement most OGC interface
aaime: Eclesia, you make a proposal
aaime: just like everybody else does
Eclesia: i made a jira task in geoapi
aaime: far or close it's not the matter at stake
jgarnett: Eclesia I was recommending you update the geotools interfaces first; and then "pull up" the geoapi interfaces.
jgarnett: basically don't develop the geoapi interfaces in isolation directly from the specification.
Eclesia: it's geotools that must align on OGC specification not the contrary..
jgarnett: stay in geotools with the safety/sanity check of running code.
desruisseaux: I would said the opposite: right it as close of specification as possible. Only after see if we need to make extensions.
aaime: we do not "must", we choose, it's different
desruisseaux: (typo: write it as close...)
aaime: and choose == proposal + pmc vote
vheurteaux: simboss: one of the purpose of GT2 is to follow OGC interfaces no?
jgarnett: indeed; please note that geotools style objects have been checked against SLD 1.0; there will be a very small difference between them in SE 1.1. That difference is what we need to talk about.
vheurteaux: in my understanding it's why it jump from 1.x to 2.x
aaime: vherteaux, as long as they are sane and they don't break existing software, yes
jgarnett: perhaps because I performed the check of geotools style vs SLD 1.0 I am more comfortable?
aaime: but we're not OGC monkey
simboss: vheurteaux: to be honest 1> I do not care much about OGC interface 2> who decides when to switch?
simboss:
jgarnett: the PMC does; when we have a proposal to vote on.
simboss: jgarnett: exact
simboss: OGC does not pay me
jgarnett: indeed; OGC (and thus GeoAPI) simply gives me food for thought.
aaime: switching to geoapi filter was one of these "follow ogc" decisions and we're still paying the consequences (or a badly managed transition)
jgarnett: When I like their thoughts and we have a proposal to implement; we go.
simboss: but anyway if Eclesia proposal is strong, I see no reason why we should not accept it
aaime: simboss++
jgarnett: aaime not too sure about that; we had solid WFS 1.1. needs; the problems we encountered were due to scope; and we tried to inflict the process proposal thing on geotools to cut down on that.
jgarnett: lets give the proposal process a chance to handle SE 1.1
jgarnett: (it will be a good test)
aaime: that is my whole point
desruisseaux: Referencing, after the switch to ISO 19111, has been one of the most stable modules in geotools because we made a big effort to follow the ISO specification. Those guys have more expertise than we have and their specifications are often wiser than what we would have imagined on our own. A module is more stable if it has been designed in such a way that it can meet needs that we even didn't...
desruisseaux: ...imagined today, and following the work of real expert groups help us a lot to reach this goal.
simboss: it is just that switching interfaces is, IMHO, a big step and it is good to at least let everybody know about it
aaime: I'm not against ISO or OGC per se
aaime: I'm just saying that regardeless where proposals do come from, they have to be discussed
simboss: guys: sorry to interrupt but unless I see a proposal
simboss: I will vote -1 regardless of the idea
simboss: because I don't see why I had to spend night to refactor
simboss: my work
simboss: and wait for approval
aaime: desrisseaux, "real expert group" have shown to be able and make very silly decisions due to lack or real world involvement
desruisseaux: Sometimes they fail andrea, but often they succeed. You should also admit that.
simboss: otherwise
aaime: j2ee v2, wcs 1.0 -> 1.1 transition, inability to handle real world symbolization are just a few examples
simboss: IMHO all this discussion is pure philosophy
simboss: changes require proposal and vote
desruisseaux: ISO 19111 is much wiser than what I would have imagined on my own.
aaime: of how an "expert group" can screw up big time
desruisseaux: ISO 19107 is way above the capacity of my little brain.
jgarnett: hrm; this is a good discussion; but I would like to bring the meeting to a close soon. I am looking forward to Eclesia's work; geoapi really needs the help and I am glad someone is getting a chance to fix it up.
aaime: which is very very scary by itself, if not even you can handle it, who's going to be able and use it?
aaime: desriusseaux, if you don't like it this way, make a proposal asking that gt2 follows blindly whatever ISO or OGC thing comes out and let's count the votes pro or cons
jgarnett: Eclesia I will be glad to review your work; Martin said you had a couple of weeks on this topic? From my own experience with ISO19107 it may take a bit longer.
desruisseaux: ISO 19107 is hard because it tries to address a hard topic. The fact that the earth is spherical and 3D really brings a lot of complication - but this is real world and handling that fact needs a much higher expertise than I have...
jgarnett: you are correct martin; SE 1.1. is easier
Eclesia: My work is on SE1.1 , ISO19117 portrayal and GO-1
jgarnett: but often what took me some time was thinking through how to make good consistent interfaces from these specifications.
jgarnett: fun fun
aaime: My point is not to bash OGC or ISO, it's just that anything using them is no different than any other work -> it needs a positively voted proposal
acuster: hg, bzr or git will render this whole discussion irrelevant
aaime: acuster, if you want to fork gt2 you don't need any of them?
***Eclesia is not making this for fun, we have needs behind those specifications
jgarnett: agreed aaime; it is how we run the project. I give equal weight to designs that have shown success (success may be running code that is a joy to use; or success may be an ISO stamp of approval showing that a group has been able to sort through more compromises than I can imagine....)
acuster: open source is a fork
aaime: if you want to make an official gt2 release with that stuff otherwise the tools do not help
aaime: a proposal is still needed
acuster: proposals are only needed to change existing api
acuster: as per the developer docs
aaime: right
vheurteaux: just to understand things guys
jgarnett: right; the existing api in this case is GeoTools LineSymbolizer; we would be asking it to extend a geoapi class.
acuster: but I don't care one way or another, it's a good few years before we're even close to having working code
vheurteaux: is GeoAPI dependant to GT ?
aaime: no
acuster: i.e. a GIS
aaime: it's not
aaime: but GT uses it so we have to decide wheter we do follow it or not
aaime: if Spring screws up big time in 3.0 geoserver will stop using it
vheurteaux: ok, but Eclesia is working only on the GeoAPI part ATM
aaime: vherteaux, and that is the business of whoever is managing geoapi
aaime: the proposal will be needed when you want to implement those interfaces in gt2
jgarnett: vhrurteaux; if I was managing I would ask Eclesia to start from GeoTools interfaces (since they represent an encoding of SLD 1.0 that works and has been checked)
vheurteaux: so the proposal must be done in order to be used in a GT project (ie. RS)
jgarnett: add in the new stuff for SLD 1.1
jgarnett: and pull up all the get methods into a super class in geoapi.
jgarnett: (low risk all around)
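A tiny sketch of the "pull up the getters" plan described just above: a read-only interface at the GeoAPI level, with the GeoTools interface extending it and adding the mutators. The names are illustrative, not the real interfaces.

    public class PullUpSketch {

        /** What would live in GeoAPI: a read-only view matching the SE model. */
        interface LineSymbolizerSpec {
            String getGeometryPropertyName();
        }

        /** What would stay in GeoTools: the same object, plus the setters. */
        interface MutableLineSymbolizer extends LineSymbolizerSpec {
            void setGeometryPropertyName(String name);
        }
    }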
aaime: the proposal must be made even when gt2 starts using geoapi sld stuff as well
aaime: since it changes the gt2 api
Eclesia: my actual classes are a merge between geoapi SLD and GeoTools styling
jgarnett: then I would present a change proposal;
jgarnett: - to add the SE 1.1 methods to getools
jgarnett: - to add in the extends section
vheurteaux: ok I see...
jgarnett: Eclesia; yes - and the two are almost identical; except where geoapi screwed up (by following a version of SLD that was never published)
jgarnett: ie geoapi has two mistakes: a version of SLD that was never published; and two copies of the style interfaces (one based around Feature, and one based around Graphic). We can do better ... and it is a nice bit of work.
desruisseaux: The style interfaces in Graphic should probably be removed.
jgarnett: (so far from talking to eclesia I see two "interesting" sections for the proposal; with actual new ideas that will be of benifit to Filter, and the idea of inline graphics)
desruisseaux: We already talked about that issue at OGC meeting.
jgarnett: yes;
jgarnett: but this is turning into a geoapi meeting.
aaime: oh well, the issues are serious
jgarnett: and we have kept the geotools meeting going on longer than I can personally aford to give it.
jgarnett: aaime; so serious it is worth starting from the proven geotools base.
jgarnett: (which is holding up very well; I just went over it with a Java 5 stick last month)
jgarnett:
simboss: guys sorry bother but could we come to a conclusion ?
jgarnett: conclusion; meeting over; I would like to review Eclesia's work on the geoapi list.
simboss: yeah and the RS proposal?
jgarnett: and I am confident that the most this will do to geotools is give us SE1.1.
jgarnett: RS proposal is a separate matter; you have enough votes.
jgarnett: I checked it against SE 1.1
aaime: indeed, without -1 on it
aaime: you can start merging it
jgarnett: (and martin has a chance to do that as well; since he has not voted yet)
aaime: 3 days have passed since the proposal has been upgraded, no?
simboss: even a bit more
jgarnett: yep; means you can commit; the window for feedback is a little longer - but not much.
aaime: (was that two weeks?)
simboss: Eclesia: is there any chance you could write something about what you are doing
Eclesia: (two weeks not much compare to more than 3months for the process )
sfarber left the room.
Eclesia: you can review the jira task simboss
simboss: eclesia
Eclesia: i'll commit in the next days
aaime: Eclesia, you're right, but yet you could have started working on it after the required time passed
simboss: sorry
simboss: but looking at jira
simboss: is something I will not do
aaime: there's a reason we put a limit of 2 weeks on feedback
aaime: to avoid proposals being stalled by lack of feedback
simboss: because I have no time
Eclesia: i'll remember that, you can be sure of it
simboss: just a small page with some directions would help
aaime: the proposal process is there on a wiki page for everybody to read
simboss: exact
aaime: I'm not popping up rules out of a magic hat
acuster: proposals are great
aaime:
aaime: I would not go so far as to say they are great but they work better than not having them
aaime: from the wiki page:
aaime: "To avoid stagnation by lack of interest/time from community members the following assurances are provided:
aaime: svn access for changes is granted within 3 days from the proposal
aaime: proposal is accepted 'automatically' within 15 days (unless objections are raised) "
jgarnett: Eclesia you can start with the plan I had to upgrade Style to SLD 1.1 if you want
acuster: they are the closest we ever get to design documents for now
Agenda:
0) what is up
1) mvn vs jar names
2) dynamic sld graphics
3) SoC
4) arcsde
5) mvn vs jar names (revisit)
jgarnett has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde
jgarnett: okay we are 15 mins into the meeting timeslot; can we start ...
aaime: sure
jgarnett: 0) what is up
***aaime trying to cram up proposals to make gt2 move a little
aaime: (if nobody stops me next thing is modular rendering)
***groldan is doing geoserver bug fixing and ensuring WFS output formats do use CoordinateSequence
***jdeolive is doing geosearch stuff and starting to play with geoserver config again
jgarnett: jgarnett; uDig can now render polygonsymbolizers from an SLD file again - happiness. Starting in on gabriel's mad plan to switch ArcSDE over to using commands in a queue rather than locks around connections.
***Eclesia improving style editor widget for SE
desruisseaux: Martin: revisiting the +/-180° longitude and +/-90°latitude limit in MapProjection, recently modified from MapProjection to a simple warning. It has more implication, like making CRS.transform(Envelope, ...) more paranoiac (they need to check if we cross the 180° limit).
desruisseaux: (typo: from MapProjectionException to simple warning)
simboss_ is now known as simboss
jgarnett: (reminds me Ecleisa; a lot of the scary code I found for style / expression stuff was in your classes :-D Try and use the SLD utility class so we can share hacks)
jgarnett: 1) mvn vs jar names
jgarnett: I think this is actuall a formal proposal from martin now?
desruisseaux: I wrote a wiki page with the current situation, and the proposal that Jody suggested to me in an IRC last Friday.
desruisseaux: (looking for the URL...)
jgarnett: page is not under "proposals" ... go fish.
desruisseaux:
jdeolive: i did not like the alternative of renaming all directories
desruisseaux: I do not like renaming directory neither...
jgarnett: jdeolive++ I agree, what did you think about just renaming the "leaf" directories?
jdeolive: i am also thinking we might want to consider the point where the ends do not justify the means on this one...
jdeolive: no, i dont like that either
desruisseaux: Continuing: I do not like renaming directory neither, but I wonder if it would be the less trouble-prone solution.
jdeolive: i dont like having to add all this complexity into our build and module structure
jgarnett: I am not sure that "Module renaming" is the goal here; the goal seems to be to make our project a bit more normal with respect to maven? So more maven plug-ins work ...
jdeolive: just to get a particular report or artifact to "work"
desruisseaux: "mvn site" is not the only Maven plugin which could benefit from a layout 100% conformant to Maven expectation. Last time tried some other plugins was working better with Maven conformant structure (I admit that I don't remember which one... was a long time ago)
desruisseaux: Renaming directory make directory names uglier, but the build simplier since it allows us to remove all custom settings
jdeolive: so what was the problem with just removing the gt2-prefix?
desruisseaux: (<finalName>, <scm>, <url> and the like)
jdeolive: naming conflicts?
aaime: yes, various of them
desruisseaux: They were this H2 name conflict that I didn't spotted before commit (because we don't use H2 on our side...)
aaime: between modules in gt2 (wfs datastore and wfs model)
ggesquiere [n=gilles@arl13-3-88-169-136-131.fbx.proxad.net] entered the room.
aaime: and with other projects
jdeolive: ok, so lets just rename those modules and tell people that they cant have two modules with the same name in geotools
aaime: plus look on maven... most of the time each jar brings the project name along
jdeolive: not sure i understand the h2 one?
jdeolive: aaime++
jdeolive: that seems to be java standard for major projects like spring
aaime: h2-2.5-SNAPSHOT : datastore
jdeolive: why dont they have these issues? or do they?
aaime: h2-1.0-snapshot: h2 itself
zzorn left the room (quit: Remote closed the connection).
aaime: major projects are not using maven afaik
jdeolive: ok, so lets rename h2 then
aaime: (not sure)
desruisseaux: I think that Glassfish use Maven (not sure...)
jdeolive: they must in some wat to get there artifacts into the maven repos\
jdeolive: wat = way
aaime: jdeolive, not sure they are doing that
aaime: there may be some non developer type that does it
jdeolive: ok... they must have fun with snapshot versions then
jdeolive: regardless
aaime: I asked various times to have jars pushed into the maven repos (not for major porjects thought)
jgarnett: jdeolive++ you are correct we could rename "h2" as "data-h2" or "h2-plugin"
desruisseaux: "h2-store"?
jgarnett: or "geotools-h2"; but then the temptation is to be consistent.
jdeolive: how abotu h2-spatial
jgarnett: jdeolive++
desruisseaux: Yes I like h2-spatial
jdeolive: cool
aaime: this is spring:
aaime: still using cvs, and still using ant
desruisseaux: (h2-plugin would be inconsistent with other plugins; we don't put -plugin suffix everywhere)
SamHiatt: I'd vote on that name.
jgarnett: for the main library modules we run into the trouble that our module names are pretty generic; "main.jar" and so on ...
jdeolive: agreed
SamHiatt: I mean, I'd vote Yes to that name.
desruisseaux: But on the proposal issue: we have a choice:
jgarnett: has the list of "spring-aop" etc...
desruisseaux: 1) rename directory - ugly but really lead to the simpliest and less troublesome Maven setting we could have.
jgarnett: back in a second.
***Eclesia must go ++
desruisseaux: 2) Keep current state (with h2 renaming) - better directory name but requires Maven hacking
Eclesia left the room.
aaime: here is wicket:
aaime: prefix in the dir names
aaime: (using maven as we do)
jdeolive: i definitley vote for 2
aaime: I prefer 1 I think
desruisseaux: Jody?
aaime: this is hibernate:
aaime: (flat names)
jdeolive: aaime: is your reason for voting for 1 so we can have maven artifacts prefixed with gt?
aaime: so that it's easier to deal with maven
aaime: maven is very fragile
aaime: so I expect troubles if we try to work in a way different than it expects
jdeolive: ok, i have a question
jdeolive: for martin
jgarnett: back
desruisseaux: Yes Andrea, Maven is very fragile. This is the reason I'm tempted by 2 while I would normally prefer solution 1.
jdeolive: what was the original issue?
desruisseaux: Oups, opposite
jdeolive: that brought about the original renaming?
aaime: making mvn site work
desruisseaux: Tempted by 1 while I would prefer 2 if Maven was more solid
jgarnett: hibernate confused me a bit; do they actually have a core.jar ?
desruisseaux: "mvn site" among other, but not only.
acuster [n=acuster@rab34-2-82-230-29-3.fbx.proxad.net] entered the room.
aaime: jgarnett, no, in the repos, they have prefixed names
desruisseaux: I wonder how they get prefixed names in the repository...
aaime: so they are in the same situation as us... but with a twist... I don't see anything that makes the prefixes for the repo
aaime: my guess: manual process or automated with a script other than maven
jdeolive: question, what do we need site for?
aaime: all the reports maven generates end up in site
acuster has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde 4) graduation status 5) project updates
aaime: javadoc, coverage, dependencies, ....
jdeolive: sure
jdeolive: but those reports work just fine
jdeolive: or maybe i am wrong
aaime: if you run them on a single module, yes
aaime: not if you try to make a full build
jdeolive: so you can't generate javadocs for a full build?
desruisseaux: javadoc and all site
aaime: javadocs do work, other do not
desruisseaux: and there are other mojo plugins that I tried a while ago that don't like non-standard layout
jdeolive: coverage reports work
jdeolive: i have done them before
desruisseaux: (don't remember which one - maybe it was the one listing the todo tasks, or last svn commits, etc.)
aaime: wow, never managed to?
aaime: cobertura crashed, the commercial one had issues as well
jdeolive: anyways... the point i am getting to is my original
jdeolive: is this really worth it
desruisseaux: I would claim that yes
desruisseaux: As Andrea point out, Maven is very fragile
jdeolive: ok... well all the reports i deem necessary work...so i would say no
desruisseaux: mvn site is not the only plugin having trouble with non-standard layout
aaime: we have lots of confused users that could enjoy a good mvn:site result
jgarnett: aaime for hibernate-core can we see how they did the prefixed names in the repos ?
desruisseaux: The links are broken, unless you put <url> declaration in every pom.xml
aaime: jgarnett, yes, it seems they did
desruisseaux: And experience show that we always endup with wrong URL declared in pom.xml
jdeolive: these issues seem more like maven bugs to me
desruisseaux: Yes definitively, we have trouble with Maven being fragile.
desruisseaux: Note that I'm fine with either solution 1 or 2. The solution that I would like to avoid is what we had before...
jgarnett: looking at how hibernate pulled it off.
jdeolive: not sure i would call it fragile... just because some obscure thing with site generation does not work...
desruisseaux: I looked at the pom.xml and didn't found a clue...
jdeolive: i mean... thats probably why a lot of projects do not use the site stuff
jgarnett: has a couple hacks that look similar to what we need? they morph things into a different shape for deployment ?
aaime: I still dont' get how they manage to add the prefixes
desruisseaux: Well, in the case of hibernate the JARs are prefixed in the repo simply because their artifactIDs are prefixed. So hibernate is doing what we did before.
aaime: yeah. Whilst on the other side wicket is generating their site with mvn site
aaime: so they added the prefixes
desruisseaux: So they have a directory/artifactID mismatch, like we did.
jdeolive: my preference is we leave things the way we had them before
jdeolive: it seems silly to me to jump through all these hoops for a non-necessary maven feature
jdeolive: but ... that is why we have a PSC
jgarnett: martin they have a bunch of site.xml stuff; is that what they use to untangle mvn site:site
desruisseaux: I feel it necessary...
aaime: (wicket site is generated by maven, incredible as it is, it's true... or at least it was before they switched to apache... not sure now)
jdeolive: desruisseaux: my definition of necessary is "the project won't run without it"
jgarnett: I am halfway between
desruisseaux: It is valuable then...
jgarnett: we cannot afford another two days downtime
jgarnett: but the project should not really be running without code coverage results either.
jdeolive: jgarnett: come on
desruisseaux: And again, "mvn site" is not the only plugin.
jgarnett: I spend a couple days a year getting coverage results. Not saying this is a great reason jdeolive; just admitting it is the truth.
desruisseaux: In the way we had before, we need more declarations in every pom.xml. Most people do not keep those declarations up-to-date.
jdeolive: thats life with maven...
desruisseaux: More than one pom.xml had wrong <scm> or <url> declaration.
jdeolive: it seems we used to do what most projects do
jdeolive: so i am against all of this
jdeolive: so consider my vote a -1
desruisseaux: Yes thats life with Maven. My hope was to simplify our life by relying on maven defaults.
***jdeolive is bowing out of conversation
jgarnett: from my standpoint we are a bit stuck here are we not? This proposal is not clear cut ready to go; and we made a mistake by trying to remove prefixes last week without a proposal (and without understanding the consequences)
aaime: jgarnett++
jgarnett: so what are we left with ... we have 10 mins left and some more agenda items that are also important.
jgarnett: martin your proposal as written will work?
jgarnett: and does not involve renaming any directories
jgarnett: can we use it until such time as <finalName> is fixed in maven?
aaime: jgarnett, the fix may be 6 months or 2 years away
jdeolive: submit a patch
jdeolive: it worked for me with the eclipse plugin
aaime: maven bug fixing is even worse than gt2 one
jgarnett: good points all
jgarnett: The alternative proposal (where you rename leaf projects)
aaime: jdeolive, heh, I tried to work on maven a bit but really, that project has a serious documentation problem
jgarnett: has two down sides; merges are hard - and it makes working on geotools expensive for everyone.
aaime: I could not even find the javadocs...
jgarnett: and it does not completely fix the problem does it?
desruisseaux: It should fix it completely I believe
desruisseaux: The alternative proposal would lead to a 100% Maven default compliant layout.
jdeolive: until the next problem comes up
aaime: jgarnett, do you merge from the root of the modules? I usually do it inside th module otherwise it's too slow
desruisseaux: Next problems are less likely with default layout than custom one.
jdeolive: thats a naive view, considering that there are 3 other major projects that depend on geotools
jgarnett: aaime I do not understand the question ... "merge from the root of the modules"
aaime: jgarnett, I usually do merges inside the module that has the modifications
jgarnett: next question; both proposal end up with the same artifact names do they not?
aaime: so changing the name is just a matter of moving to a different directory
aaime: if you merge from the root of the gt2 codebase on the contrary
desruisseaux: No, the alternative proposal would have only "gt-" prefix, not "gt-lib" or "gt-ext".
aaime: changing names will make merge fail
jgarnett: oh I see; svn merge; I usually merge from root; except when merging from 2.2.x - in which case I cry.
acuster has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) arcsde
aaime: jgarnett, that's way too slow for me
aaime: (from the root)
aaime: it takes minutes before the merge starts
jgarnett: interesting; I did not know that aaime.
aaime: that's why I did not care about renamed directories
jdeolive: i would rather wait minutes than have to run the command for 80 different modules
jgarnett: guys I need to wrap up this topic; can we return to it at the end of the meeting?
desruisseaux: Jody has fast network connection since he is close to the svn server...
aaime: jdeolive, right, but when do you have such massive changes?
aaime: (do you backport them?)
jdeolive: when you move classes and things like imports change it can happen easily
jgarnett: jdeolive++
jgarnett: guys I am going to move us to the next agenda item...
aaime: correct... (never happened to me thought, since I'm more of a bug fix man than a refactor the world one)
jgarnett: martin I will be happy to talk about this a bit more after.
jdeolive: i agree 80 is unlikely, but 10 is not
desruisseaux: Jody, okay.
jdeolive: different strokes for different folks
jgarnett: 2) dynamic sld graphics
jgarnett: understood; but
jgarnett:
jgarnett: there we go...
jgarnett: this one is you aaime
aaime: well, the proposal says it all
jgarnett: (and also a nice user who has been sending us lots of email and trying out alternatives in code)
aaime: the problem is making Mark and ExternalGraphic easily extensible
jgarnett: I have seen three approaches; the one andrea has written up is good ... and mostly I want to see this problem solved.
aaime: and allow feature attributes to be used in the definition of the symbol name
aaime: or the url that points to it
aaime: without breaking the SLD standard
aaime: or our object
wolf [n=wolf@hoasb-ff0add00-126.dhcp.inet.fi] entered the room.
wolf: hi
aaime: it all boils down to a creative use of CQL expression templates within the names specified in
aaime: the sld
aaime: as proposed
aaime: it allows to support extensible symbols in both marks and external graphics
aaime: generate charts, both locally and using a remote server
aaime: use a fully programmatic driven symbol set like mil2525b
aaime: and generally speaking allowing everybody to create plugins for whatever format they fancy
jgarnett: Hi wolf will put you down next
aaime: though I'm personally interested in what's easily reachable through the internet
jgarnett has changed the topic to: 0) what is up 1) mvn vs jar names 2) dynamic sld graphics 3) SoC 4) arcsde
wolf: jgarnett: ack
aaime: that is, symbols and fonts
aaime: that internet is full of
aaime: and that we cannot use nowadays
aaime: The hard part imho is not writing the parsers, but writing the symbols
aaime: so we want to leverage what's already out there
aaime: that's it
aaime: questions? observations?
jgarnett: aaime; I only had one bit of feedback (that I sent to email); I suspect when the OGC does solve this problem they will use <PropertyName> and not CQL ... does not matter too much to me either way.
jgarnett: other than that lets rock and roll.
aaime: when they do
aaime: we can just have the factories receive the resolved expressions
jgarnett: We have a user slated to give us some MIL2525B stuff, so that will help vet the system with a real world example.
jgarnett: aaime++
sbenthall [n=seb@cpe-66-108-80-238.nyc.res.rr.com] entered the room.
aaime: but... we have an alternative... preparse the url so that they become expressions themselves
aaime: (did not think about this one in fact, could be interesting)
cholmes [n=cholmes@cpe-66-108-80-238.nyc.res.rr.com] entered the room.
aaime: say we take the url with embedded cql expression and we turn it into an OGC Expression
aaime: that we pass to the factories along with the Feature
aaime: (we use the string concatenation functions and the parsed cql expressions to make it up)
jgarnett: The data structure right now already holds an Expression
jgarnett: (at least last I looked)
aaime: that would require changing the parser... sigh
jgarnett: parser already generates a literal expression
jgarnett: you can use a visitor to attack that and do you CQL subst
aaime: yeah, but the parsing needed to turn mil2525://${...} into an expression is nothing like standard sld parsing...
aaime: though... yeah, it's doable
jgarnett: nope I am wrong sorry man
jgarnett: it is a URL
aaime: ah, for ExternalGraphics, yes
jgarnett: (I was getting confused with WellKnownName which is already an expression)
aaime: yeah, it's an attribute, so an Expression does not make sense
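For reference, a rough sketch of the pre-parsing idea discussed above: a URL or well-known name with an embedded CQL template (say "http://chartserver?value=${PERSONS}") is turned into a single OGC Expression by chaining strConcat calls, so a renderer or graphic factory only has to call evaluate(feature, String.class). This is not the proposal's actual code: the class name GraphicTemplate is made up, the CQL/factory package locations are from memory for GeoTools of that era, and unclosed ${ templates are not handled.

    import org.geotools.factory.CommonFactoryFinder;
    import org.geotools.filter.text.cql2.CQL;
    import org.opengis.filter.FilterFactory;
    import org.opengis.filter.expression.Expression;

    public class GraphicTemplate {
        private static final FilterFactory FF = CommonFactoryFinder.getFilterFactory(null);

        /** Turns "literal${cql}literal..." into one Expression built from strConcat calls. */
        public static Expression parse(String template) throws Exception {
            Expression result = FF.literal("");
            int i = 0;
            while (i < template.length()) {
                int start = template.indexOf("${", i);
                if (start < 0) {
                    return concat(result, FF.literal(template.substring(i)));
                }
                int end = template.indexOf('}', start); // assumes a well formed ${...}
                result = concat(result, FF.literal(template.substring(i, start)));
                result = concat(result, CQL.toExpression(template.substring(start + 2, end)));
                i = end + 1;
            }
            return result;
        }

        private static Expression concat(Expression a, Expression b) {
            return FF.function("strConcat", a, b);
        }
    }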
jgarnett: okay moving on ... can we call a vote on this so aaime can get to work
jgarnett: jgarnett +1
aaime: sure, if there are no objections...
CIA-31: groldan 2.4.x * r29926 geotools/modules/library/main/src/ (3 files in 2 dirs): GEOT-1769, Use CoordinateSequence to write down GML2 coordinate lists, instead of Coordinate[]
aaime: aaime +1
aaime: and... that's it?
jgarnett: aaime I am a bad person; I left the meeting run over time.
aaime: well, let's move the vote to the ml then
simboss: poor aaime
jgarnett: jdeolive voted on email
aaime: simboss... poor what... you have a vote you know?
simboss: yep
simboss: I like this thing
simboss: I actually worked on something similar for nato
aaime: and? or... but?
simboss: but of course they never released it
simboss: I have to read the proposal
aaime: ok, voting on mail is good then
simboss: I am sure your approach is much better than mine
simboss: one curiosity
aaime: ha, who knows
simboss: did you look at APP6?
grrrrr [n=groldan@217.130.79.209] entered the room.
simboss: beside milstd2525b I mean
aaime: no, what's that?
jgarnett: okay next ...
simboss: similar symbology
jgarnett: 3) SoC
jgarnett: wolf you have the floor
groldan left the room (quit: Nick collision from services.).
simboss: just more complex
aaime: I did not look into mil2525b either, theunsgis did that
grrrrr is now known as groldan
jgarnett: We are trying to see if the April 18th deadline is all we need to know about.
wolf: Okay, you need to decide on what projects you want
SamHiatt: desruisseaux: you still here?
wolf: preferrably by tomorrow 12:00 UTC
desruisseaux: Hi Samuel
SamHiatt: I was trying to build pg last night...
SamHiatt: had problems with the new gt renaming stuff...
acuster: hey sam, we're in a meeting right now, if you can wait a second
desruisseaux: Oups, sorry.
SamHiatt: Do you think If I rolled back to before the changes that I will be able to build?
wolf: Google has set a dealine on 19th april 07:00 UTC for assigning mentors, but just to be safe we want to do it on the 17th
SamHiatt: acuster: sorry! I thought the meeting was done! :Z
wolf: any more questions concerning SoC?
jgarnett: So 17th = tomorrow at noon?
wolf: no
simboss: today is 14th
wolf: tomorrow is 15 th
jgarnett: "wolf: preferrably by tomorrow 12:00 UTC"
simboss:
CIA-31: saul.farber * r29927 geotools/gt/modules/plugin/wms/src/main/java/org/geotools/data/ (2 files in 2 dirs): two minor code cleanups
jgarnett: So simboss you seem to be the man with the plan this year; what do we need to do? Make sure the projects we like have mentors? And vote them up ...
wolf: that is a very strict preferrably
wolf: Like I'll bite you if you don't make it
acuster: SamHiatt, try on #geomatys ("/join #geomatys")
jgarnett: and here I thought his bark was all we had to fear.
simboss: well I have found 2 projects which are interesting
wolf: Once I get the list of proposals each project wants (in order of preference) I'll adjust the scores
simboss: one actually is more like a contribution
simboss: the pyramid jdbc plugin
simboss: the second one is from a guy here in italy
simboss: he want to try and integrate better jpeg and png support
simboss: through jmagick+imagemagick
jgarnett: so simboss can we take this to email; and actually talk about the proposals? or is this going to be a case of no time?
simboss: yeah I can do that
jgarnett: (we are 15 mins over meeting time; and two agenda topics to go ...)
jgarnett: 4) arcsde
CIA-31: groldan * r29928 geotools/gt/modules/library/main/src/ (3 files in 2 dirs): GEOT-1769
sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room.
jgarnett: gabriel asked me to warn people I was hacking arcsde datastore
jgarnett: I am starting to implement the plan he talked about on IRC a couple of weeks back.
aaime: what's the mad plan about?
jgarnett: So I will ping groldan and sfarber as I go.
aaime: (30 word summary?)
groldan: here
jgarnett: the idea is that locking the connection is killing gabriels happiness; and he wants to switch to a queue.
jgarnett: I need to make arcsde datastore run with only one connection; so I need to let the "metadata queries" use the current transaction / connection
jgarnett: (ie asking about table info etc...)
aaime: hmmm... what is the transaction-connection relationship in SDE? in jdbc is 1-1
groldan: sde connections are not thread safe, we use sde native transactions held at a connection, we need to access a connection concurrently, locking a connection for the lifetime of a FeatureIterator is hard to maintain and a performance killer
jgarnett: so even with all of that we have two ways in which arcsde datastore works; transactions for work; and AUTO_COMMIT for getFeatureInfo( typeName )
jgarnett: I need to narrow the gap and have them share the same connection.
jgarnett: (there is a big IRC discussion about this design here)
jgarnett: it is what I will be working from.
aaime: where do you set the current connection? pass as a parameter, use it as an "in the environment" param like a thread local?
groldan: its at the transaction state
jgarnett: Think a runnable that takes a connection as a parameter
jgarnett: this is mostly a warning; I am sure I will have questions as I go; I well send all email to the devel-list as it happens.
jgarnett: 1) mvn vs jar names part ][
groldan: the idea is to divide execution units in as tiny bits as possible and enqueue them, that provides for a more granular use of the connection, as oppossed to blocking the connection until a FeatureIterator is fully traversed
aaime: groldan, I get that... this seems to be doable only in read only access though?
groldan: why?
aaime: well, transaction need to be isolated
aaime: if you have a long transaction running you cannot share the same connection with other transactions no?
groldan: executions units are: "fetch a feature", "insert a feature", etc. All of them work with the active transaction
groldan: okay
aaime: right, so you cannot share the same connection between different transactions, no?
groldan: addFeatures() is an execution unit per se
jgarnett: aaime I am only planning to share a connection for the read-only activities; ie the ones used by ArcSDEDataStore to look at metadata.
groldan: but its ok because a getFeature called while an addFeature is in progress needs to wait till addFeatures returns
groldan: so the queue still works?
aaime: groldan, what if you're scrolling with a feature iterator?
aaime: using that damn thing you don't know if you're reading or writing
groldan: not quite getting the point
aaime: probably me neither
aaime: too used to jdbc way of doing things
aaime: never mind
groldan: in jdbc you have different transaction isolation levels
groldan: right?
jgarnett: yes
aaime: that's not my point
groldan: okay
aaime: in jdbc you cannot allow two transactions to access the same connection
groldan: so I don't get it
groldan: here neither
aaime: but if you're using a feature writer or a feature iterator
groldan: its all about how you define the bounds of your transaction (the runnable)
aaime: you need to keep the connection "locked"
jgarnett: what is hard here aaime; is that jdbc connections are pretty threadsafe?
jgarnett: arcsde connections are not
jgarnett: so we need to wrap them in a lock or face death
jgarnett: gabriel wants to use a queue rather than a lock.
aaime: no, the fact that once you open a connection for a transaction, you cannot share it anymore
groldan: so aaime what you describe maps to a certain "transaction isolation level"?
jgarnett: I thought you could share the connection with another thread?
aaime: sorry, so the problem is multiple threads trying to access the same transaction concurrently?
jgarnett: yes; that is the problem
aaime: (only if the other thread is using the same transaction)
groldan: the point of using a queue rather than a lock is that the connection is gonna be used by a single thread, even when serving multiple threads
jgarnett: multiple threads; one connection; how not to screw up
aaime: aah, sorry, I was thinking about geoserver where different threads = different users = different transactions
simboss_ [n=chatzill@host253-202-dynamic.37-79-r.retail.telecomitalia.it] entered the room.
jgarnett: okay moving on?
groldan: yeah, its a per transaction queue
aaime: I guess your case is udig, multiple threads, one writing, the other rendering, but all against the same connection
sfarber left the room (quit: Read error: 104 (Connection reset by peer)).
groldan: yes
aaime: yeah sorry for bothering
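For the record, a minimal sketch of the "queue instead of lock" idea groldan describes above. ConnectionQueue and Command are made-up names and the SeConnection is stubbed with Object; the point is that the non thread-safe connection is only ever touched by the executor's single worker thread, while any number of client threads enqueue small units of work ("fetch a feature", "insert a feature") and block on the returned Future.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ConnectionQueue {

        /** One small unit of work against the connection. */
        public interface Command<T> {
            T execute(Object connection) throws Exception; // would be an SeConnection
        }

        private final Object connection;                   // not thread safe
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        public ConnectionQueue(Object connection) {
            this.connection = connection;
        }

        /** Called from any thread; the command itself runs on the single worker thread. */
        public <T> T issue(final Command<T> command) throws Exception {
            Future<T> result = worker.submit(new Callable<T>() {
                public T call() throws Exception {
                    return command.execute(connection);
                }
            });
            return result.get(); // caller blocks here, but never holds the connection itself
        }

        public void dispose() {
            worker.shutdown();
        }
    }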
jgarnett: desruisseaux ping?
desruisseaux: Jody, yes?
jgarnett: 1) mvn vs jar names
desruisseaux: yes
groldan: np, trying to explain it really helps
aaime: (not really sure jdbc connections are supposed to be thread safe though... never mind, go on)
jgarnett: stack.pop()
jgarnett: Is everyone sick of this topic? I am really tempted to rollback the changes until we have a clear direction that will work; is that a horrible idea desruisseaux?
***aaime jumps out of the window
jgarnett: (good thing aaime is on the ground floor)
desruisseaux: If we roll back, does it mean that this issue will be forgotten?
aaime: jgarnett, do you remember the iron bars at the window? ouch!
groldan: aaime: mind moving to #geoserver?
jgarnett: and here I thought aaime was skinny
jgarnett: martin I don't think this will be forgotten; if I was scheduling stuff I would tackle this right before we release 2.5.0 (or 3.0.0).
jgarnett: ie we have so many problems right now on trunk with functionality; interrupting anything for a few days to sort out some build problems just upsets everyone.
sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room.
jgarnett: It would also help to arrive at this with a proposal we know works; in this meeting we ended up discussing a bunch of "What-ifs" and we had no way to vote +1
CIA-31: groldan 2.4.x * r29929 geotools/modules/library/main/src/main/java/org/geotools/gml/producer/GeometryTransformer.java: organize imports in GeometryTransformer
jgarnett: so a couple of questions:
jgarnett: - Even with this proposal we still have a bunch of work unaccounted for; updating the user guide examples etc... is this something we can get funding / time for?
desruisseaux: In this particular case, it may be difficult to know if a proposal works before we try it on trunk, since some problems appear when people use the dependency. For example we couldn't have spotted the h2 conflict on our side since we don't have any project using h2...
jgarnett: - Lining up with maven expectations is a good thing; we will run into less problems.
jgarnett: right; so now that we tried it once we have a second test to try... that maven-ant-tasks script.
jgarnett: um and I still cannot deploy? being able to deploy probably needs to be another test.
desruisseaux: I have been able to deploy last friday.
aaime: It would be nice if all jars had always the same name, no matter how they are generated
jgarnett: I was unable to deploy today ... see email.
aaime: (nice == really required...)
desruisseaux: I mean, I ran "mvn deploy" after the prefix removal and it worked for me.
aaime: jgarnett, looked at the mail, the error message means nothing to me...
desruisseaux: If we want the same JAR name both in the repository and in the target/binaries directory, then it would be a push for alternative proposal (renaming directory) or a rollback.
jgarnett: thanks... I was clueless as well so I asked for help. I could deploy last week.
jgarnett: martin I think that is what we want.
jgarnett: I would personally rollback;
jgarnett: and then schedule this proposal for the "release candidate"
jgarnett: and make sure we have enough hands on deck to do the documentation and so on.
jgarnett: But I also want to understand the hibernate build process; since they seem to have it working ...
aaime: jgarnett, they do not generage mvn site afaik
desruisseaux: Release candidate would be too late, because it leaves too little time to spot the problems. It also means that we have to live months without daily site and javadoc.
jgarnett: I don't know how much flexibility you have in scheduling martin?
SamHiatt: nice one aaime
jgarnett: well noticed aaime
***aaime checks out hibernate and sees what the heck comes out of mnv:site
desruisseaux: We could do that later too, but I would really love to have site and javadoc generated every day as soon as we can...
jgarnett: agreed martin
jgarnett: (I am not sure if that gets through since this has been such a heated topic; having mvn site working is a very good goal - it costs me days a year not having access to the coverage reports)
aaime: +1 me too... site is good
jgarnett: so I want to see how to get that at not too high a cost
aaime: test coverage?
aaime: why?
jgarnett: I had thought renaming the folders was too high a cost; but apparently it is not the case for aaime.
wolf left the room ("Konversation terminated!").
jgarnett: because I often have to meet test coverage requirements for a deliverable.
aaime: I see
aaime: jgarnett, yes, it's a by product of how svn network protocol works
aaime: it does lots of tiny reqeusts for merges and updates
jgarnett: so as I understand it geoserver and udig are working right now
aaime: so if I know where the change occurred, I can cut down most of them
jgarnett: but with known problems.
aaime: down from one minutes to a few seconds
jgarnett: (that is interesting aaime; later I may ask you for an example)
jgarnett: aaime we are pretty sure we want the gt-main.jar style in the repository right?
jgarnett: You can use martin's jar collector to fetch things out in the right shape.
desruisseaux: I would be okay for a rollback if there is a good chance that this topic does not fall into a black hole for months or years...
aaime: imho yes... that's what everybody is doing
aaime: and frankly
aaime: would you like a main.jar in your classpath?
aaime: main of what?
aaime: if you depend on 20 external libraries, think if every one did the same
aaime: nice, hibernate does not build out of the box... they must have a damn private repo
desruisseaux: Well, as Jody points out, we don't have this problem when using jar-collector. But this is a non-standard solution.
jgarnett: understood; and the maven "answer" is not that cool; it asks that you keep a directory structure around to keep stuff straight.
jgarnett: so martin; even though udig and geoserver can probably be made to work
jgarnett: I honestly think we need to put gt-main.jar into the repository for end users
jgarnett: we could negotiate
jgarnett: if we can make monthly releases for users that would help offset this.
jgarnett: but I would like some more users at some stage :-D
simboss left the room (quit: Connection timed out).
simboss_ is now known as simboss
desruisseaux: In this case, and if we want a directory layout matching the module name, there is no other choice than renaming directories...
jgarnett: I would like to agree on that; aaime what do you think?
aaime: jgarnett, found hibernate's secret sauce: they are still not using maven
aaime: they only have it on trunk
aaime: but they are releasing from a stable branch using ant
jgarnett: The remaining questions for me come down to timing.
jgarnett: but how do they release to the maven repository?
jgarnett: using the maven-ant-tasks to deploy?
jgarnett: (or some other madness)
aaime: no idea
aaime: manually I believe, this pom is made up, it has no parent:
aaime: wicket does it with maven, but they renamed the dirs
aaime:
aaime: thing is, they have a very simple structure compared to gt2
jgarnett: question in the wicket pom.xml they have
jgarnett: <moduleSets> <moduleSet> <includes> <include>org.apache.wicket:wicket</include>
jgarnett: what is that syntax for include ?
aaime: dunno... the modules tag is standard
jgarnett: oh that was in wicket-assembly-all.xml
aaime: don't know... never tried to make a maven assembly
jgarnett: does assembly have anything to do with what is deployed?
aaime: I don't believe so... it's for releasing no?
desruisseaux: It puts the JARs in a single ZIP file.
desruisseaux: (including dependencies)
jgarnett: If I look here
jgarnett:
jgarnett: it really looks like hibernate is deploying a single jar
aaime: yeah, they modularized it on trunk only
aaime: all of the other hibernate-something in maven repo are completely separated projects
jgarnett: yeah; okay - reminded that you said that already.
aaime: different version numbers and different svn
jgarnett: so martin it looks like we are boiling down to a single worthwhile proposal
jgarnett: can we update that page; and include the steps for updating the user guide.
jgarnett: and aaime can you think of a good time to try this again for geoserver?
desruisseaux: Okay, will do tomorrow.
aaime: wondering... would fixing mvn:site be so hard? Or else, have we reported an issue there?
aaime: jgarnett, don't know, I haven't been working on gs trunk for ... a month maybe
jgarnett: I wondered as well andrea; but my guess is mvn site:site gathers up so many reports; that any of them could break us.
aaime: you break it, I won't even notice unless you break the build hard
jgarnett: I see; well uDig is using trunk hard; and I lose billable hours and have to have a meeting whenever trunk goes down for days
aaime: yet today I was working on it... trying to reduce build times (unacceptably long)
jgarnett: So I would like a couple days warning; and to try this out on a weekend next time.
desruisseaux: Fixing partially mvn:site is tedious but doable. One of my issues is that it requires explicit <scm> and <url> declarations that nobody maintains, and which always become wrong with time.
desruisseaux: But even if we get partial mvn:site, some reports will work and some other reports will not.
aaime: desruisseaux, yeah, but I was thinking of a real bug fix in the maven site code, not a workaround?
desruisseaux: Each report is an independent plugin, so whether or not it works is a plugin-by-plugin question.
aaime: yeah, ok, I get it, everybody just assumes users follow the maven standard practices
desruisseaux: Some assume, others do not...
jgarnett: agreed; remember we had a lot more success when we renamed the src/main/Java folders .... it pays to not push the maven plug-ins beyond the defaults
jgarnett: (as sad a statement as that is)
aaime: jgarnett, really? I always wondered why
aaime: (why we changed, that is)
aaime: now that was really breaking merges, since you had to do two separate merges for each fix (one for the src, the other for tests)
jgarnett: I ended up having a lot less maven trouble after the change.
jgarnett: heh; I still merge from 2.2.x
desruisseaux: For having defaults (make pom.xml simpler and more reliable), but also for making some room for other languages if we wish (I admit we are not yet there, but I was thinking about SQL, PHP or JavaScript...)
jgarnett: wow the main stream press are on to us:
aaime: ok, I'm out of here.. bed time
jgarnett: aaime just a sec
jgarnett: I was trying to see about timing;
aaime: erk...k
jgarnett: you are telling me you don't care - and it is up to me?
desruisseaux: I will post an email tomorrow about the module name issue.
jgarnett: ie my request for a few days warning?
aaime: timing?
jgarnett: ie when should we do this...
aaime: I said I personally don't care
aaime: but other developer will care, and a lot
aaime: ask jdeolive
jgarnett: okay
desruisseaux: I will post an email tomorrow. Would it be okay Jody?
jgarnett: so martin; should we roll back the prefix change?
jgarnett: And then work through this proposal process ....
jgarnett: (that is what I am trying to figure out)
jgarnett: do we limp along right now; or do we rollback ...
desruisseaux: Could it be a question in my tomorrow email?
jgarnett: yes it can
jgarnett: okay thanks for the extra long meeting of doom
jgarnett: good luck with your symbol hacking aaime
jgarnett: (I am looking forward to it)
aaime: thanks
jgarnett: I can post the logs...
desruisseaux: Thanks all for your patience.
sfarber left the room.
aaime left the room.
Agenda:
- what is up
- GeoTools 2.4.2 release
- update headers
- gt2- prefix removal
- process module
- postgrid in gt2
SamHiatt: Martin: I was considering bringing up PG in the meeting to see what other people's interest in it is... and to discuss how to frame the module, once it moves to geotools.
ggesquiere left the room (quit: Client Quit).
desruisseaux: Sure if you wish
ggesquiere [n=gilles@arl13-3-88-169-136-131.fbx.proxad.net] entered the room.
desruisseaux: But I'm not sure that it is mature enough...
aaime [n=aaime@82.56.105.98] entered the room.
desruisseaux: (would probably be an unsupported module for a while...)
jgarnett: morning
jgarnett: meeting time?
jgarnett: (we moved it an hour earlier; ie now; did we not?)
desruisseaux: yes
aaime: yap
gdavis [n=gdavis@mail.refractions.net] entered the room.
jgarnett: sweet
jgarnett: aside: thanks for the email over the weekend aaime - I will try and stay a bit more on target with udig stuff.
jgarnett has changed the topic to: 0) what is up
aaime: np, sorry for being mad
aaime has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release
jgarnett: it is okay; I have broad sholders; a share of the blame; and I know it is a frustrating topic.
desruisseaux: Topic: gt2- prefix removal in module names
user451 [n=user451@mail.refractions.net] entered the room.
jgarnett: Anyone know if Eclesia is around today? It would be fun to talk process stuff with him...
desruisseaux: I don't known...
jgarnett: aside: aaime I was reviewing filtervistors stuff over the weekend and liked what I saw of postgis-versioned module.
jgarnett has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers
aaime: jgarnett, eh, as they say, plan to redo once
aaime: with some modifications to the query support I could do it better
desruisseaux has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal
aaime: without making it postgis specific at all
jgarnett: indeed
gdavis: Topic: unsupported/process module committed
jgarnett: well it would be fun to see the same api backed on to arcsde versioning madness
jgarnett has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal 4) progress module
aaime: jgarnett, yes, that's a reason I did not try to push the versioning interfaces into gt2-api
aaime: (lack of 2nd and 3rd implementation of the concept)
jgarnett: oh: I did look at the renderer; there are a few hacks to check if query was implemented (with respect to reprojection), I cannot tell if they are used - but they exist.
aaime: any other topic?
jgarnett: aaime++ yeah; I think we have 3 clients for the process module so I am hopeful this one will work; but also wanting to check on Eclesia.
jgarnett: I think we better start...
jgarnett: 0) what is up
desruisseaux: Martin: full time on postgrid...
***aaime trying to get geotools 2.4.2 out of the door
jgarnett: jgarnett - hacking up process module with gdavis; going to build a swing user interface for fear of putting too much information in javadocs where it would be ignored.
gdavis: gdavis: same as jgarnett
SamHiatt: whooops... I'm not really away.
SamHiatt: Too late to add a topic?
aaime: no, if it's a quick one
CIA-31: jgarnett * r29830 geotools/gt/modules/unsupported/process/src/ (15 files in 10 dirs): code review of process module part 4, finally with example and test case
SamHiatt: Just concerning how to frame the postgrid stuff once it makes it to gt-unsupported
aaime has changed the topic to: 0) what is up 1) GeoTools 2.4.2 release 2) update headers 3) gt2- prefix removal 4) progress module 5) postgrid in gt2
jgarnett: 1) GeoTools 2.4.2 release
jgarnett: aaime floor is yours
aaime: Any objection for me to go and release?
aaime: (any solid objection)
desruisseaux: All right on my side
jgarnett: sounds good.
aaime: ok
SamHiatt: why would I object?
jgarnett: (If you had a few moments to look at GeoServerOnlineTest case (and see if I am missing something stupid) it would make me happy - I hate the WFSDataStore not working)
CIA-31: jdeolive * r29831 /geotools/tags/2.4.2/: Tagging gt2 2.4.2
jgarnett: do you need any help on the release aaime? Jira and what not...
aaime: (that is really me using jdeolive account)
jdeolive: its /me twin from the alternate universe
jgarnett: aaime++ good way to confuse svn blame.
aaime: jgarnett, announcements as usual beat the hell out of me
jgarnett: anything else for this topic?
aaime: jgarnett, sorry, I'm using a shared VM
jgarnett: okay ping me when you have the announcement text ready; sending email is a good "Waiting for the build" task.
aaime: I'll do... tomorrow
jgarnett: so ... next topic?
aaime: now I don't even know if I have enough time to make a deploy so that Mike can release GeoServer
aaime: yap
jgarnett: 2) update headers
jgarnett: martin and acuster answered some of my questions last week
jgarnett: so if we have some hot shot with regular expressions
jgarnett: we should be able to get 90% of the library in one shot...
jgarnett: (I think we are now the only original incubation project still going ...)
jgarnett: link is here:
jgarnett: we need a search replace for (C) ********, GeoTools Project Managment Committee (PMC)
jgarnett: (C) ********, Open Source Geospatial Foundation (OSGeo)
jgarnett: (or whatever the syntax is...)
jgarnett: any takers?
SamHiatt: I might be able to help with that...
aaime: SamHiatt, do you have committer access?
aaime: (in any case, you could try to setup an ant script to do the rename and have someone else run it)
SamHiatt: However, I'm probably not the best candidate for the job at the moment...
aaime: I guess no one is better than the only candidate
jgarnett: note; we still have to review the result; but no sense working hard.
SamHiatt: Haha...
jgarnett: I think eclipse search and replace can handle it; I may try later.
jgarnett: SamHiatt can I email you if I fail?
SamHiatt: No, I am not a GT committer...
jgarnett: fair'nuff
desruisseaux: I may ask Cédric to help me on this one for the metadata, referencing and coverage module, and ask peoples if we run the script on other modules. But not right now...
SamHiatt: jgarnett:
jgarnett: thanks...
SamHiatt: sounds good...
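A throwaway sketch of the kind of search and replace being asked for above. HeaderFixer is a made-up name, the regex assumes the old headers really do read "(C) <years>, GeoTools Project Managment Committee (PMC)" as quoted, and encodings and non-java files are ignored, so the result would still need the review mentioned earlier.

    import java.io.*;
    import java.util.regex.*;

    public class HeaderFixer {
        private static final Pattern OLD = Pattern.compile(
                "\\(C\\)\\s*(.+?),\\s*GeoTools Project Managment Committee \\(PMC\\)");

        public static void main(String[] args) throws IOException {
            fix(new File(args[0])); // point it at the root of a module
        }

        static void fix(File file) throws IOException {
            if (file.isDirectory()) {
                for (File child : file.listFiles()) fix(child);
                return;
            }
            if (!file.getName().endsWith(".java")) return;
            StringBuilder out = new StringBuilder();
            BufferedReader in = new BufferedReader(new FileReader(file));
            boolean changed = false;
            for (String line = in.readLine(); line != null; line = in.readLine()) {
                Matcher m = OLD.matcher(line);
                if (m.find()) {
                    line = m.replaceFirst("(C) $1, Open Source Geospatial Foundation (OSGeo)");
                    changed = true;
                }
                out.append(line).append('\n');
            }
            in.close();
            if (changed) {
                FileWriter w = new FileWriter(file);
                w.write(out.toString());
                w.close();
            }
        }
    }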
jgarnett: 3) gt2-prefix removal
jgarnett: martin? I think ...
SamHiatt: I was planning on doing the same kind of thing.
SamHiatt:
desruisseaux: Cédric refreshed the gt2-prefix-removal branch.
desruisseaux: I think we are ready for a merge with trunk
jgarnett: so what does that actually mean? we need to change our maven dependencies in geoserver?
desruisseaux: But I probably need to remind what it is about
desruisseaux: and what would be the consequence.
desruisseaux: Yes, the Maven depencies would need to be updated.
aaime: what was the status for the eclipse users? (since that was the blocker last time)
desruisseaux: Eclipse can now include the version in the module name
jgarnett: I thought jdeolive found some magic setting.
jdeolive: yes
jdeolive: there is a property you can set to set the pattern to be used for the eclipse project name
desruisseaux: This is not a very robust workaround, but it would work as long as Geotools's main doesn't have the same version number as Geoserver's main.
jdeolive: i think you could use the groupid to get around it
desruisseaux: Maybe, I don't know about the Eclipse plugin...
desruisseaux: (I'm on Netbeans)
desruisseaux: As a side note, GeoAPI site is now generated every day by Hudson (since last week).
desruisseaux: The goal of this gt2-prefix removal is to do the same with GeoTools
jdeolive: doing maven eclipse:eclipse -DprojectNameTemplate=[groupId].[artifactId]
jdeolive: would do it
jgarnett: so we would need to modify our developers guide instructions justin?
jgarnett: I will do so now...
desruisseaux: Thanks Justin
desruisseaux: Sounds like a much cleaner approach than the version number.
jdeolive: np
aaime: which version of the eclipse plugin does support that?
jgarnett: (or can we bake that setting into the pom.xml ?)
jdeolive: yup
jdeolive: but not everyone might want it though
aaime: (in gt2 we have declared version numbers, we're not using "latest and greatest")
jdeolive: aaime: correct
jdeolive: i think we might need to upgrade the version of the eclipse plugin
jdeolive: which might mean people have to upgrade maven
jdeolive: what is the current recommended version?
aaime: release guide says 2.0.5
aaime: but I've been using 2.0.8 for the latest releases I made
SamHiatt: I'm on 2.0.8...
desruisseaux: I'm on 2.0.8 as well
desruisseaux: (linux box)
jgarnett: (updated)
jgarnett: I think everyone is actually using 2.0.8 now...but email asking if we could switch to 2.0.8 did not get any response.
SamHiatt: (sorry... maybe off topic, but... does anyone have a problem with the latest maven-surefire-plugin?)
jgarnett: aaime; if that is what you are using for release
aaime: yep
jgarnett: I will update the developers guide instructions now.
desruisseaux: (Samuel: no issue with surefire on my side lately)
SamHiatt: (I have to specify version 2.3.1 to prevent build failures)
SamHiatt: (oh, well...)
aaime: (SamHiatt, sometimes we see "failure to load main class from surefirexxx.jar" on the build server)
desruisseaux: So is there any objection about going ahead with the gt2- prefix removal in module name?
aaime: (but it's random)
aaime: not really... what was the advantage again?
SamHiatt: (Hmmm... thx)
aaime: some site generation issue right?
desruisseaux: When generating the web site with "mvn site", URLs are broken if the module name doesn't match exactly the directory name.
aaime: right right
jgarnett: did you not have a question about the "xsd" group of modules?
desruisseaux: Yes
desruisseaux: Actually Justin gave his agreement a while ago, but we wanted to verify that it was still possible (maybe the situation has changed).
desruisseaux: The xsd child projects have names like "xml-core", which doesn't match the directory name because of the "xml" prefix.
CIA-31: jdeolive * r29832 /geotools/tags/2.4.2/ (97 files in 97 dirs): Changed version number to 2.4.2
desruisseaux: The proposal was to put those child modules in a "org.geotools.xml" groupID, and remove the "xml-" prefix from the artifactID.
jdeolive: desruisseaux: yeah i agreed with that
jdeolive: and still like the idea
desruisseaux: Just two details (would like to known your preference):
desruisseaux: groupID: "org.geotools.xml" or "org.geotools.xsd"?
desruisseaux: (since the parent module is "xsd"...)
jdeolive: right... hmmm... no huge preference... i think xml is more logical... but then i think there would be a collision with the old xml module
jdeolive: since the root pom would be org.geotools.xml and so would the old module in library
jdeolive: would it not?
desruisseaux: At your choice...
jdeolive: lets stick with xsd
desruisseaux: Okay
jdeolive: that way we don't have to change any module names
jgarnett: sweet
desruisseaux: Second minor details: should we add a "xsd-" prefix in the JAR name?
desruisseaux: (would be: gt-lib-ext-...)
desruisseaux: sorry
desruisseaux: gt-lib-xsd-...
desruisseaux: (assuming the XSD modules are in "library", I don't remember)
jdeolive: nope, extension
desruisseaux: Thanks. Would be gt-ext-xsd...jar then
jdeolive: would this just be for release artifacts?
desruisseaux: This is just the JAR name
desruisseaux: The module names don't have any prefix.
desruisseaux: But we use the Maven <finalName> construct for adding those prefixes automatically in JAR names only.
jdeolive: is this the same name as the jar will have in the local maven repo?
jdeolive: that will still be just artifactId-version.jar correct?
desruisseaux: Yes
jdeolive: cool
jdeolive: yeah i am fine with that
desruisseaux: Okay, we will go ahead with that then. Thanks!
jgarnett: 4) process module
jdeolive: desruisseaux: suggestion
jgarnett: opps
desruisseaux: Yes?
jdeolive: we might want to clear out old artifacts in the online repositories, at RR and the one you guys mirror
desruisseaux: Yes I agree
jdeolive: cool
ticheler [n=ticheler@87.1.7.2] entered the room.
CIA-31: jdeolive * r29833 /geotools/tags/2.4.2/README.html: Updated README file
jgarnett: may make checking out and building an older udig impossible..
desruisseaux: But it may be safe to wait a few weeks, until uDig and Geoserver (at least) updated their dependencies.
jgarnett: not sure that we care?
jdeolive: agreed
jdeolive: see mailing list, user confusion
jgarnett: we can update udig the moment you are done.
jdeolive: and our builds might keep kicking along not updating geotools artifacts
jgarnett: okay ... moving on?
jgarnett: 4) process module
jgarnett: gdavis you have the floor
SamHiatt: I ain't away!
gdavis: so the process module is committed under unsuported currently
gdavis: with the interfaces, etc, and currently one implemented process
gdavis: I guess I'm looking for any feedback on what's currently there
gdavis: if anyone wants to try making another process and see if they run into any walls or problems
gdavis: if not I will continue on
SamHiatt: What kinds of modules are in there?
gdavis: also, i will be making 2 new modules next
aaime: the new ones, and the old ones without a maintainer
gdavis: one for a wps client, and one to hold beans/bindings
gdavis: does anyone have any feedback about where the beans/bindings should live?
jgarnett: (the beans may exist in geoserver; which is currently providing a home to wfs 1.1 beans as I understand it?)
aaime: what binding framework are you going to use?
aaime: jgarnett, correct, though it would be better to move all those bindings to gt2
gdavis: which is why we thought we should do that for this module from the start
aaime: so you're using xml-xsd bindings?
gdavis: yes
gdavis: should I be making a new module just for the beans/bindings?
jgarnett: kudos to jdeolive on the documentation btw - it really helps
aaime: Ok. Remember to ask before creating those two modules (the procedure is always the same)
aaime: (jgarnett, if someone wonders who's bombing lists.refractions.net, that's me doing the deploy)
jgarnett: understood; I can give an update on server hell after the meeting
gdavis: ok, I think I've done my spiel
jgarnett: anything else on the process side of things? I do wish Eclesia was here as having three clients to drive the api would really help my trust.
gdavis: i would welcome any feedback anyone has after looking over the current process api
jgarnett: okay moving on ...
desruisseaux: I can ask Eclesia tomorrow if he can contact you Jody
jgarnett: 5) postgrid in gt2
aaime: SamHiatt, this is yours
jgarnett: (thanks martin; it would really help; we will try and have a swing example for him)
SamHiatt: So I just wanted to quickly share my ideas for PG
SamHiatt: At the FOSS4G 07 Geomatys was the only group I found doing anything to organize and serve nD coverages...
SamHiatt: I would hope that by the time FOSS4G08 rolls around that we will have PostGrid somehow integrated into GT so that GT can boast of having an ND solution for Grid Coverages.
jgarnett: I got a couple of questions
SamHiatt: IFREMER, as well as my group, Ecocast, will have some cool stuff to show by then...
jgarnett: I have been watching this work take shape over a while ..
jgarnett: and I have it in my mind that I was going to check it out
jgarnett: when two things happened:
jgarnett: - some kind of geoapi interfaces were set up for me to review
jgarnett: - some kind of data access api was actually agreed on by simone and martin
jgarnett: Are either of these things in store? If not how do you expect to integrate with geotools?
desruisseaux: I can bring some more point here:
jgarnett: the current solution of client code making use of GridFormatReader scares me
desruisseaux: Jody has raised one of the issues that I see with postgrid integration with GT.
desruisseaux: The issues are:
SamHiatt: I should point out here that I don't know the details....
mcoudert [n=mcoudert@AAnnecy-256-1-12-49.w90-10.abo.wanadoo.fr] entered the room.
desruisseaux: - For now PostGrid avoids any use of GridFormatReader. It interacts directly with ImageIO. I don't know if it is acceptable for GeoTools (for sure it is not an example of what to do).
desruisseaux: The plan was to refactor it when some kind of GridCoverageStore would be ready, but we are not yet there.
aaime: right right
aaime: we should find some time to define a GridCoverageStore indeed
aaime: I think everybody wants it
aaime: it's just that nobody seems to have time
SamHiatt: I would be interested in being involved in that discussion.
aaime: SamHiatt, everybody would
SamHiatt: Can you point me to any past discussions/wikis on the issue so I can get up to speed?
aaime: we need someone that takes the time to do the heavy lifting or
aaime: of setting up a proposal
desruisseaux: 2) Postgrid test suite is hard to setup. It could take a while before a postgrid integrated in GT has a test suite run at "mvn install" time.
aaime: creating a prototype implementation
aaime: desruisseaux, that would only mean it would take a while before it goes into supported land
jgarnett: simone already agreed to this approach:
simboss: I am planning to put ou something before the end of the month
SamHiatt: desruisseaux: I plan on fixing the test suite, at least for my case...
SamHiatt: perhaps I could help with that.
aaime: jgarnett, I like the general idea (based on WCS)
simboss: we are doing some work to support Eo time series for ESA
jgarnett: but was sad when it was not discussed and accepted at the same time as groldan's data access stuff.
aaime: but the names are scary
jgarnett: The names?
aaime: GridAccess... arg, reminds me of the dreaded DataAccess
jgarnett: (ah
jgarnett: I don't care about the names this instant
SamHiatt: this sounds cool.
jgarnett: more that developers can set aside some time to work together)
jgarnett: Honestly "Raster" rather than "Grid" may be better all around for everyone
aaime: moreover, there is no metadata access there, which is bad (we lack the equivalent of DataStore.getSchema())
SamHiatt: I agree, jgarnett;
jgarnett: indeed; page is there to record what people think are a good idea.
simboss: well if samhiatt ha some time
aaime: Grid comes from the specs though
simboss: has
simboss: he can start throwing some ideas up on the wili
simboss: wiki
SamHiatt: I'll read up on the issue and offer my input.
desruisseaux: Maybe because Raster is typically thought of as 2D while grid can be ND?
simboss: as a start
aaime: SamHiatt, you subscribed to the ml?
SamHiatt: I'll start throwing my ideas all over the wali.
jgarnett: cool; thanks guys - I am happy to review (I am so tired of this problem frustrating everyone)
aaime: (since it's a better place to discuss, wiki is better to store the results of the discussion)
jgarnett: martin had a second issue... where did it go?
desruisseaux: I'm here
SamHiatt: Ok, so "raster" isn't the best either.
desruisseaux: The second issue was test suite hard to setup.
SamHiatt: I like the idea of "ND Coverage" or something
aaime: (the 3d equivalent of raster being voxel)
sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room.
jgarnett: any thoughts on that one martin?
SamHiatt: aaime, I think I am on the ml...
SamHiatt: but I don't have time to keep up with much of it.
aaime: nice
jgarnett: set up the database on the build server or something ... or boil all the setup steps into a java class people can run.
desruisseaux: A third issue is that current postgrid code still has strong links with its original purpose, which was to perform statistical analysis that may not be of general interest. So some parts would probably need to be trimmed down, but it may take a little while before it can be separated without breaking the statistical stuff.
aaime: SamHiatt, I just don't want the wiki page to degenerate into a comment mess
SamHiatt: VoxelAccess?
jgarnett: martin you may be able to hide the stats stuff behind gdavis's process module.
jgarnett: (just a thought)
desruisseaux: Yes sure I should look at time, but the usual problem is that I'm slow...
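Since the GridCoverageStore idea keeps coming up, here is a pure strawman of the shape people are asking for above - nothing of this exists in GeoTools, every type below is made up; it just mirrors DataStore with a getSchema()-like metadata call plus a query-driven read, which is the gap aaime points out.

    import java.io.IOException;
    import java.util.List;

    /** Strawman only: a raster analogue of DataStore, for discussion on the wiki. */
    public interface GridCoverageStore {

        /** Names of the coverages offered by this store (cf. DataStore.getTypeNames()). */
        List<String> getNames() throws IOException;

        /** Metadata for one coverage: envelope, crs, bands, time/elevation axes (cf. getSchema()). */
        Object getInfo(String name) throws IOException;             // would be some CoverageInfo type

        /** Read a possibly nD slice; the query would carry bbox, resolution, time, elevation. */
        Object read(String name, Object query) throws IOException;  // would return a GridCoverage
    }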
zzorn [n=zzorn@212.16.103.33] entered the room.
zzorn_ [n=zzorn@212.16.103.33] entered the room.
jgarnett: so we are a few mins past meeting time; just a warning that we need to wrap up.
jgarnett: darn ...
jgarnett: SoC
zzorn left the room (quit: Read error: 104 (Connection reset by peer)).
jgarnett: deadline was today; there are lots of proposals for people to review this week.
SamHiatt: Cool...
jgarnett: Something we can take out to the mailing list this year; one email thread per proposal?
jgarnett: You are done SamHiatt?
SamHiatt: Yeah, sounds great!
jgarnett: okay thanks for the efficient meeting everyone. happy hacking.
jgarnett: I will post the logs.
aaime: thanks
SamHiatt: Thanks!
aaime: 2.4.2 is up on the repos
SamHiatt: Yay!
aaime: I'll do the rest of the release procedure tomorrow
-1) chatter as people arrive one hour early due to time switch
0) what is up
1) SoC deadline extended
2) svn cut off
3) IRC one hour earlier; motion passed!
4) WPS module, Process module
acuster: when's the meeting?
acuster: 1/2 hour or 1 1/2 hour?
jgarnett: 1.5 hours I think?
jgarnett: meeting expressed in GMT right?
jgarnett: GMT does not have DST to the best of my knowledge.
***acuster thought it was UMT that stayed fixed
jgarnett: not sure myself.
jgarnett: I will be online at both times
acuster: for you it should be the same time as it was last week
acuster: so 1.5 hours it is
jgarnett: okay
gdavis: jgarnett
gdavis: what is the URL to that WPS page?
jgarnett: finding it ...
jgarnett:
jgarnett: (it was not listed under proposals
jgarnett: as Eclesia was being shy)
jgarnett: I will move it now
jgarnett: since we are actually interested...
gdavis: thanks
jgarnett: It is not really written up as a "Proposal"
jgarnett: there is a specific template
jgarnett: with
jgarnett: - status - so we can see what the votes were
jgarnett: - tasks - so we can be sure people actually have the full amount of work accounted for (documentation and tests often get left out)
jgarnett: feel free to edit that page; fix the examples or whatever.
gdavis: ok
jgarnett: also comments are good.
jgarnett: Parameter.filter may be useful; so we can specify length( value ) < 255 for example
jgarnett: you may need to review ProgressListener to understand how this works; but you know Eclipse ProgressMonitor.
jgarnett: ()
gdavis: thnks
jgarnett: so a geoserver process engine
jgarnett: would make its own "ProgressListener"
jgarnett: and use that to stop the job
jgarnett: or figure out status.
jgarnett: basically a callback object.
gdavis: right
jgarnett: that keeps things from being insane when writing your process.
jgarnett: we have adaptors in uDig
jgarnett: from eclipse ProgressMonitor to this ProgressListener api
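To illustrate the callback idea (the interface below is made up for the example, not the real ProgressListener api): the hosting engine supplies the listener, and the process reports progress through it and checks it to see whether it should stop.

    public class ProgressExample {

        /** Made-up callback; the real api lives in gt-api / the process module. */
        public interface SimpleListener {
            void progress(float percent);
            boolean isCanceled();
        }

        /** How a long running process would cooperate with the engine's listener. */
        public static void runJob(int steps, SimpleListener monitor) {
            for (int i = 0; i < steps; i++) {
                if (monitor.isCanceled()) {
                    return;                              // the engine asked us to stop
                }
                // ... do one unit of work here ...
                monitor.progress(100f * (i + 1) / steps);
            }
        }

        public static void main(String[] args) {
            runJob(10, new SimpleListener() {            // an engine-specific implementation
                public void progress(float percent) { System.out.println(percent + "%"); }
                public boolean isCanceled() { return false; }
            });
        }
    }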
acuster: jgarnett, did you get feedback from Tyler as to who he has (c) assignment forms for?
acuster: or are we working from intent?
groldan n=groldan@217.130.79.209 entered the room.
ralbertini n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room.
ralbertini left the room.
jgarnett: hello
jgarnett: acuster
ralbert3 n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room.
jgarnett: we are working from the internet page here ()
jgarnett: it contains a green [check] next to each person who has told me they are sending tyler a document.
jgarnett: that is enough to call the list this month.
jgarnett: Let's check with tyler after we have given people a chance to panic.
acuster: ok
jgarnett: I sent an email out; seems we are culling about half the accounts?
jgarnett: The good news is we have about 40 contributors that are active enough to a) respond and b) sign a document.
***acuster didn't realize udig was on 2.2
jgarnett: (oh wait that includes organizations; one green [check] per document sent to osgeo central)
acuster: that's a long way back
jgarnett: yeah.
jgarnett: well we could not keep up
jgarnett: given the QA on 2.3.
jgarnett: sad for the udig development community however.
acuster: looks like everyone is going to be asking you for help on udig
jgarnett: less time for geotools that is for sure.
desruisseaux n=chatzill@mtd203.teledetection.fr entered the room.
jgarnett: hi Martin
jgarnett: I wrote a Range class
jgarnett: but have not committed it yet.
jgarnett: :-D
aaime n=aaime@host146-41-dynamic.6-87-r.retail.telecomitalia.it entered the room.
desruisseaux: Hello Jody
jgarnett: thought I would give you a chance to panic first.
aaime: ?
jgarnett: I cannot tell if the meeting starts now; or in an hour.
jgarnett: Range is the only reason referencing depends on JAI
jgarnett: as such it keeps our referencing jars from being used in OpenJUMP (and indeed lots of places).
ralbert3 left the room (quit: Client Quit).
jgarnett: It was almost dropped for a server app at refractions (a geocoder) because of the JAI thing.
desruisseaux: I think you can commit you class. The next step would be to spot the place where there is explicit dependency to Range (not to a subclass like MeasurementRange)
jgarnett: martin has a bug report somewhere.
ralbertini n=ralberti@88-139-140-12.adslgp.cegetel.net entered the room.
jgarnett: understood; I am going to get myself happy with the test cases first.
desruisseaux: Thanks
jgarnett: Do you know if it actually occurs as a dependency in client code? The use of Range?
jgarnett: javax.jai.util.Range that is.
desruisseaux: Not sure - we really need to search. But I believe that most dependencies are toward NumberRange or MeasurementRange.
jgarnett: also if we don't use subtract; I may not implement it.
jgarnett: cool.
jgarnett: part of my motivation was the nice graph gabriel emailed about
jgarnett: from a gvSig presentation.
jgarnett:
jgarnett:
jgarnett: pretty silly graph
jgarnett: but reminds me that we can do a lot of good
jgarnett: if we can carve off referencing jars as a separate download for others
jgarnett: (and thus start them on the road of using geotools)
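A minimal sketch of what a JAI-free range might look like — purely illustrative, and not the class actually being committed:

    // Illustrative generic range with no dependency on javax.media.jai.util.Range.
    class SimpleRange<T extends Comparable<T>> {
        private final T min, max;
        private final boolean minIncluded, maxIncluded;

        SimpleRange(T min, boolean minIncluded, T max, boolean maxIncluded) {
            this.min = min;
            this.max = max;
            this.minIncluded = minIncluded;
            this.maxIncluded = maxIncluded;
        }

        boolean contains(T value) {
            int lo = value.compareTo(min);
            int hi = value.compareTo(max);
            return (minIncluded ? lo >= 0 : lo > 0) && (maxIncluded ? hi <= 0 : hi < 0);
        }

        boolean intersects(SimpleRange<T> other) {
            // ranges overlap when each one starts before the other one ends
            int a = this.min.compareTo(other.max);
            int b = other.min.compareTo(this.max);
            boolean lowOk  = a < 0 || (a == 0 && this.minIncluded && other.maxIncluded);
            boolean highOk = b < 0 || (b == 0 && other.minIncluded && this.maxIncluded);
            return lowOk && highOk;
        }
    }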
jgarnett: so is the meeting now? or in an hour?
groldan: hi
groldan: I just assumed it is in an hour
groldan: though got alert just in case
jgarnett: cool
jgarnett: groldan the geoserver 1.7.0-alpha1 released
jgarnett: that was done today
jgarnett: was that done with GeoTools trunk?
groldan: since nobody said anything regarding it, it made sense the actual meeting time not to change
groldan: yup
jgarnett: sweet
jgarnett: I want to see if I can use that for udig walkthough 1
jgarnett: (since it should not lock up on shapefile layers)
groldan: go with my bless
jgarnett: heh; go with a QA run to see if it works
jgarnett: (a lot of the transaction stuff seems stuffed with geoserver later than 1.4.1; I can no longer parse the TransactionResponse documents!)
groldan: uDig was doing pretty well for the "normal" stuff last time I tried
groldan: besides the known broken bits
jgarnett: could not edit as per walkthough 2
jgarnett: did you try?
jgarnett: (shapefile editing would deadlock geoserver)
groldan: I tried some edits through geoserver->arcsde
jgarnett: that sounds promising.
groldan: not sure about shp
jgarnett: any word on your arcsde hacking?
groldan:
jgarnett: (shp is a complete rewrite on geotools trunk now; needs lots of testing)
groldan: one thing for gt-wfs on trunk + udig
acuster: jgarnett, can you flesh out your 'required cleanup on gt trunk' email? It would serve as a good start to 2.5 final, no?
groldan: last change I did was to revert the wfs plugin to use 1.0 protocol as default
groldan: ping me if that doesnt work
jgarnett: confused; which email list... oh; what GeoTools 2.5 needs fixed for uDig to be happy
jgarnett: ah
jgarnett: it does not work; see GeoServerOnlineTest
jgarnett: when that passes uDig is happy with GeoServer.
jgarnett: You need a geoserver running on local host.
jgarnett: It now runs all the tests except the last one
jgarnett: looks like deleting a feature produces a bad TransactionResponse that cannot be parsed.
groldan: me is back to hack mode
***groldan forgot the /
acuster: martin has: review of work on referencing by Refractions and by the SoC student; changing module names to drop the gt2 prefix
acuster: prior to 2.5
aaime: prior to relasing 2.5 we need to get back the 2.4 performance
aaime: releasing a 3 times slower gt2 is not good advertising
desruisseaux: Which parts is slower?
aaime: the new feature model
acuster: jgarnett, how do we deal with cedric who is covered by geomatys but has not sent in a doc?
acuster: jgarnett, do I add him to your list anyhow?
desruisseaux: Actually Cédric has signed and sent the agreement.
acuster: added;
jgarnett: back
jgarnett: as long as he has sent me an email
jgarnett: then I will place a check next to his name.
jgarnett: martin if you know he has sent the document can you send me an email?
***acuster is adding him
jgarnett: I did not do this stuff on the email list as people occasionally tag in their managers; or have company level concerns. I could also not use the admin list as managers and so on would not of been subscribed.
desruisseaux: Will ask Cédric to send the email tomorrow (just to be sure).
jgarnett: thanks muchly.
vheurteaux: jgarnett: document sent 2 weeks ago
vheurteaux: for Cédric
simboss: ciao all
simboss: I am a bit late
acuster: early
jgarnett: trying to catch up with the comments;
acuster: jody: martin is sending you an email
jgarnett: simboss you are a bit early.
simboss: early? then I can leave and come back late
jgarnett: indeed.
simboss: forgot about the time change
aaime: jgarnett, is it holiday in Canada?
aaime: (today?)
jgarnett: nope
jgarnett: I think we just have a DST thing
jgarnett: for europe right?
acuster: EST
jgarnett: today is not a holiday.
acuster: or some such
acuster:
aaime: jgarnett, yes, we switched to daylight savings yesterday
jgarnett: so the question is
jgarnett: or was
jgarnett: does our meeting time switch? and the answer I think was no ...
jgarnett: I am interested in martins feedback on referencing factory stuff;
jgarnett: SoC was given until April 7th for students.
desruisseaux: I have not yet looked at referencing things, but it is something I would like to do before the 2.5 release if it can fit in the timeframe...
jgarnett: understood
jgarnett: yeah it would be good to sort that our for the 2.5 timeframe.
jgarnett: think it is already marked as technical debt?
desruisseaux: Could you remind me the URL for technical debts please?
desruisseaux: (searching right now just in case I remind it by myself...)
jgarnett:
desruisseaux: Thanks
jgarnett: nope
jgarnett: not listed; I was focused on stuff that had nobody paying attention.
jgarnett: You at least knew about the referencing factory stuff.
jgarnett: acuster: I have updated this page -
desruisseaux: Thanks
Editing the page...
jgarnett: to list the stuff for uDig acceptance.
desruisseaux: (as a matter of principle)
jgarnett: just a sec I will list the udig acceptance things in order of priority.
desruisseaux: No problem, I will wait
ralbertini left the room (quit: Read error: 110 (Connection timed out)).
CIA-24: dadler 2.4.x * r29745 geotools/modules/unsupported/db2/src/ (13 files in 3 dirs): GEOT-1753 - use WKB
jgarnett: oh not to worry martin; I was editing the 2.5.x page for acuster
jgarnett: and I am done now; except the site seems slow.
jgarnett: note cedric was already listed with a check next to his name...
jgarnett: back in a bit; going to grab food before the meeting (yum!)
jgarnett: acuster I am happy with the 2.5.x page now; it shows what is needed for uDig to "be happy" with 2.5.x
ticheler n=ticheler@87.1.30.71 entered the room.
jgarnett: lots of people today.
acuster has changed the topic to: IRC Meeting – 0) What is up
jgarnett: meeting time? agenda topics etc...
acuster has changed the topic to: IRC Meeting 31 March 2008 – 0) What is up
jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended
jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off
desruisseaux: Topic: can we move the IRC time one hour earlier?
desruisseaux has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off 3) IRC one hour earlier?
acuster: clap, clap, clap martin. Yep, that's how you set the topic
desruisseaux: Thanks Adrian
desruisseaux: (Adrian just taught me /help as well - my knowledge of IRC is minimalist...)
jgarnett: gdavis ping?
jgarnett: well it looks like it is time to start?
gdavis: hi
jgarnett: gdavis was going to talk abit about WPS / Process stuff; but he does not seem to be at his computer.
gdavis: im here
jgarnett: oh; you are here - want to grab a topc?
gdavis: should I just edit the title myself?
gabb n=groldan@217.130.79.209 entered the room.
jgarnett: yep
groldan left the room (quit: Nick collision from services.).
gabb is now known as groldan
gdavis: !topic
gdavis: er
gdavis: how do I do that?
jgarnett: type in your window 4) xxxx
jgarnett: and I will add it
gdavis: 4) discuss WPS module
gdavis: uh
jgarnett has changed the topic to: IRC Meeting – 0) What is up 1) SoC deadline extended 2) svn cut off 3) IRC one hour earlier 4) WPS module
jgarnett: okay lets start
jgarnett: 0) what is up
desruisseaux: Martin: works on postgrid...
acuster: acuster — reviewing Name/NameImpl;GenericName;QName; trying to get back to feature
jgarnett: jgarnett; I am bashing my head against uDig trunk (as such I am fixing things randomly as they stop me in geotools trunk; thanks to everyone for putting up with a raft of filter changes last week). I have also identified javax.jai.util.Range as the only reason referencing needs JAI - as such I am writing up a replacement so we can carve referencing off as a separate download for JTS powered projects.
gdavis: gdavis: planning new WPS/Process modules
***aaime working on some commercial stuff for TOPP
***groldan have been fixing bugs for 1.6.3, no gt/geoserver today yet though
jgarnett: cool
jgarnett: 1) SoC Deadline
jgarnett: has been extended; mentors visit the page and see if any geotools projects have come in I guess.
jgarnett: I had a paniced student asking about GeoIRC on the weekend.
jgarnett: anyone else?
desruisseaux: Not on my side
jgarnett: ()
jgarnett: wow there are lots more available today
jgarnett: (last week there was like one)
jgarnett: I think students have been given another week; shall we discuss these on the email list as we find them?
jgarnett: unless there are any students here who would like to say hi.
jgarnett: I see a KML reader for GeoTools
jgarnett: H2 spatial index
jgarnett: Raster polygonizer for GeoTools
jgarnett: and some udig / geoserver specific stuff.
jgarnett: moving on ...
jgarnett: 2) svn cut off
jgarnett: today is the day ... we have 40 people left after the cut off. If your name is not on the list send me email.
jgarnett: 35 people will be cut; most of them are SF Ids that have not been heard of in years.
jgarnett: List is here:
acuster: thanks for all the hard work jody
jgarnett: For those of you with new employees; team members; ideas for new modules etc... there are a few extra steps to getting commit access (most important is sending a contributors agreement away to OSGeo central).
jgarnett: acuster did we ever update the developers guide to your satisfaction?
jgarnett: I also wanted to ask if you had talked to Bryce about Public Domain code...
acuster: ?
acuster: no, but I'll talk to him; either way we are in the clear since the code is in the public domain
jgarnett: Developersg Guide; instructions on how we run our project; need to make sure the GeoTools Contributors Agreement is covered.
jgarnett: understood.
acuster: ah
acuster: I'll look into that
jgarnett: This page:
jgarnett: seems to have it ...
jgarnett: any other comments? I won't even touch updating our headers this week :-D
jgarnett: thanks acuster.
jgarnett: 3) IRC one hour earlier
jgarnett: This one is you martin.
aaime: me too!
aaime: I'd like to move the meeting one hour earlier me too, if possible
desruisseaux: Given the change in European time, would peoples agree about an IRC one hour later?
jgarnett: can we leave this thing at some kind of fixed time; and not have to change due to DST?
desruisseaux: (so it stay at the same hour for european)
jgarnett: I will agree; but I would like it to remain a fixed time...
simboss: +2
groldan: +1
simboss:
jgarnett: I understand this is more of an issue for europe since this is around supper time.
jgarnett: is there a time that would work for you guys; even when the clock switches back?
desruisseaux: It is 22h00 in France.
desruisseaux: (22h35 right now)
aaime: yep
jgarnett: So in the fall; if it is 20:35 in france; would that be okay?
jgarnett: ie we take it back an hour now
jgarnett: and then DST kicks in and kicks it back another hour.
aaime: not for me
jgarnett: (just a question)
desruisseaux: Earlier would be nice, but I thought that we were late in order to allow Australians to assist (I believe it is something like 6h00 AM for them?) Do we still have Australians around?
jgarnett: I don't really want to cycle the meeting time twice a year like we have been doing.
jgarnett: mleslie is in australia now
jgarnett: mleslie ping?
aaime: jgarnett, why? I don't see the issue
aaime: it just happens twice a year?
jgarnett: actually 4 times a year; since we have different days for switching in NA and EU now
jgarnett: and the topic comes up as each group is confused.
jgarnett: so I can vote +0 on this one; and +0 again in the fall.
aaime: I would not find it less confusing if
jgarnett: just wanted to see if there was a time that would work year round.
acuster: right now we have 8:00 UTC/GMT
aaime: we used a fixed gmt time
acuster: or I should say 20:00
aaime: anyways, +1 on moving the meeting one hour earlier
jgarnett: that is what the developers guide says
jgarnett:
jgarnett:
aaime: jgarnett, I've found that site to lie lately
desruisseaux: I'm +1 for one hour earlier of course. Anyone know if we have Australians around to ask?
aaime: wrong DST for USA afaik or something like that
jgarnett: mleslie but he is asleep it seems. so no awake australians.
jgarnett: so question
aaime: Australia should be moving to solar time soon afaik
jgarnett: looks like the vote is carried.
aaime: (if it did not already do that)
jgarnett: do we need to update the developers guide page above? or is it already correct.
aaime: (it's the beginning of fall there)
acuster: the guide needs to change, the link should die (as a live link)
acuster: if the time moved, that is
jgarnett: okay so vote is done; motion carried etc... can someone volunteer to update the page
jgarnett: (to whatever link is needed)
jgarnett: 4) WPS module
jgarnett: gdavis this one is yours...
gdavis: ok, so I am currently planning to create 2 new modules for WPS related work. The initial idea is outlined by Jody here:
aaime: wps module in geotools? client?
gdavis: I plan to create one module called process for a simple Process API to define processes. A second module called wps will use this new module to build up support for the WPS spec.
jgarnett: (actually that idea is for a process api; Eclisia is the origional guy)
gdavis: ok
gdavis: the process api is basically interfaces for creating processes, not necessarily wps specific
gdavis: the wps module will use this
jgarnett: anyone know where Eclesia is?
aaime: not sure I understand why there is a wps specific module in geotools, unless you're plannign to make a wps client?
gdavis: i will be making a wps client, in udig
groldan: I guess the idea is to produce a process framework in geotools, that a geoserver plugin service could wrap as a wps?
jgarnett: correct aaime; same style as wms client code (no abstraction)
gdavis: right
aaime: and the same module would be used by a geoserver wps service?
gdavis: yes
aaime: wow, that's a first, should be interesting to look at
jgarnett: and the "wps" client code would be used by geoserver when chaining
gdavis: anyways, these are the first steps towards a fully working wps service
aaime: (a module used both by client and server side that's not a datastore, that is)
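A rough sketch of the module split being described — a small, WPS-agnostic process API that a WPS service or client could then wrap. The names below are placeholders rather than the agreed design; a ProgressListener argument like the one sketched earlier would presumably be threaded through execute() as well:

    import java.util.Map;

    // WPS-agnostic description of a process: what it needs and what it produces.
    interface ProcessFactory {
        String getName();
        Map<String, Class<?>> getParameterInfo();   // placeholder for richer Parameter metadata
        Map<String, Class<?>> getResultInfo();
        Process create();
    }

    // The process itself; a WPS service would call this on an Execute request,
    // and a WPS client could expose a remote process behind the same interface.
    interface Process {
        Map<String, Object> execute(Map<String, Object> input);
    }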
acuster: [ meeting time updated on web page ]
jgarnett: ah; you guys never did WMS chaining?
aaime: thanks a lot acuster
jgarnett: (like cubewerxs?)
aaime: WMS cascading you mean?
jgarnett: yeah; forgot what they called it.
aaime: We don't have the concept of a WMS layer in geoserver configuration
aaime: so no
jgarnett: okay cool.
aaime: it's not a feature type
aaime: and not strictly a coverage either
aaime: we would need to develop a whole new set of config classes and UI
jgarnett: so gdavis; you need a thumbs up from a geoserver PMC member to start.
jgarnett: and a wiki page or something for the module you are building
jgarnett: (same thing you did for unsupported/geometry)
gdavis: geoserver wiki page?
CIA-24: desruisseaux * r29746 geotools/gt/modules/library/coverage/src/main/java/org/geotools/coverage/grid/GridGeometry2D.java: Removed the getGridSize2D() method in order to keep the API simplier - this somewhat trivial method doesn't seem to worth its weight.
aaime: wow, wait, for the gt2 modules he needs a gt2 pmc member
jgarnett: do you want to send an email to geotools-devel when you got the wiki page set up?
jgarnett: bleck
jgarnett: sorry
aaime: the psc geoserver member is needed to make up a new geoserver community module
gdavis: yes. I will do that
jgarnett: aaime what is the procedure to start a geoserver communit module? do we need a formal GSIP? or just a thumbs up ...
aaime: just a thumbs up
jgarnett: (sorry if this is off topic for todays meeting)
aaime: you'll need a gsip to turn that into an official service
jgarnett: okay got it.
aaime: yet, better talk openly and widely
aaime: before that
jgarnett: for geotools you need to meet a bunch of QA hoops.
aaime: otherwise the gsip may turn into a nightmare
aaime: if the psc does not accept it
jgarnett: we have all seen it happen.
jgarnett: okay any other questions gdavis?
jgarnett: We are looking forward to this work.
gdavis: nope, i will get to work on the wiki page and then email the list
jgarnett: sweet.
aaime: gdavis, in all you're doing keep in mind a simple thing
aaime: you're dong the work and we're grateful
gdavis: yes, the geotools side will be quite simple
aaime: but someone else will have to maintain it long term
gdavis: (geoserver too)
jgarnett: I feel I should ask a seperate PMC member to review your wiki page etc; since I was involved in scoping out the origional collaboration between uDig and PuzzleGIS.
aaime: so you have to try and please those someone elses too
acuster: lol, 'quite simple'
acuster: it's harder than that
jgarnett: (note for the project scope page - - we have making the geotools api "simple" as a top priority)
aaime: usual application of the 20/80 rule
acuster: ah, good; as long as you are not expecting doing that to be simple
gdavis: ok, I will send off a list email soon with a wiki page you can look at
gdavis: thanks guys
aaime: np
acuster: we done?
acuster: wohoo!
aaime: yap
jgarnett: I can send the logs
Agenda:
- what is up
- SoC
- GeoTools contributor agreement
- arcsde datastore command design
<jgarnett> How are you andrea?
<aaime> it's holiday in Italy, not sure about other countries
<aaime> sick, my throat hurts
<jgarnett> is it warm enough for you to play tennis these days?
<aaime> and sick, my hope in Struts2 was demolished by a couple of days investigation
<jgarnett> We just had a storm; so the sky is scrubbed clean
<aaime> jgarnett, it is all year, I play indoor during winter
<jgarnett> that is interesting; the struts 2 part ... were they not bold enough?
<sfarber> yo, meeting time?
<aaime> jgarnett, not really, it's just that there is no tool support
<aaime> would you write java code with notepad?
<jgarnett> you had to do struts 1 in notepad.
<jgarnett> but I agree it is nobodies preference.
<aaime> well, jsp alternatives (freemarker) lack editors that can do html and the templating at the same time
<jgarnett> did you ever try the eclipse Web Tools plug-ins ?
<aaime> I have it installed, but we need to move away from jsp in order to have a pluggable UI
<jgarnett> I have not looked into struts 2 enough to know what they did different this time around.
<jgarnett> hrm
<jgarnett> well lets start the meeting...
<jgarnett> floor is open for agenda topics.
<aaime> I cannot think of any
<jgarnett> Saul I wanted to chat with you or gabriel to see what was going on w/ arcsde datastore; is gabriel realling moving to a command based system like WFSDatastore?
<aaime> dunno
<jgarnett> I was working at the same time as him and kept tripping up.
<jgarnett> hard to know with out email from him...
<aaime> 3...2...1
<aaime> ...0
<jgarnett> 1) what is up
- aaime sick of java web frameworks (other than Wicket)
<jgarnett> jgarnett - enjoying the holiday; looking into ESRI Web 2.0 about face; thinking about finishing up the user guide if we have some SoC students to read it.
- groldan enjoying holiday too
<jgarnett> okay moving on ...
<jgarnett> 1) SoC
<jgarnett> Simone was nice enough to get us a page going
<jgarnett> I wonder if there are any students here today?
<jgarnett> So far the difference for me has been a bunch of private emails asking how to install geotools.
<jgarnett> Anyone else run into a SoC student?
<aaime> not me
<groldan> nope
<jgarnett>
<sfarber> jgarnett, sure thing. Let's chat after meeting?
<jgarnett> If last time is any indication we won't get much until the last couple hours before the deadline; If I can ask everyone to mind the user list this week and do not hesitate to fix any of the documentation.
<jgarnett> this will be a quick meeting
<jgarnett> 2) svn cut off
<jgarnett> I have gotten emails returned by most of the active developer community now
<aaime> mail sent, hope it gets to destination
<jgarnett> and surprisingly a lot of the inactive devel community. It has been really nice to hear how people are doing in their various paths in in life.
<aaime> any interesting story?
<jgarnett> So the good news is we will still have enough developers to continue after the svn cut off :-D
<jgarnett> the list is here
<jgarnett>
<bullenh> uhh..
<jgarnett> oops
<jgarnett>
<jgarnett> yeah!
<jgarnett> (sorry new firefox beta is occasionally tripping up)
<jgarnett> It was nice to hear that RobA is enjoying his new position; that Sanjay still exists and so on...
<aaime> (new firefox beta is working a lot better for me than the old stable
)
<jgarnett> I also got a lot of feedback from those that are really happy about the OSGeo direction; excited that they will find an easier time getting funding for GeoTools work etc...
<bullenh> (I really find the new gui much better in it)
<jgarnett> the kind of thing that we considered when we joined; but have since lost excitement over.
<jgarnett> So next Monday we will try and restrict svn access; and then we can update our headers.
<desruisseaux> Thanks for doing that Jody!
<desruisseaux> (I means contacting all those peoples)
<jgarnett> well thanks for waiting; feel I delayed everything a bit doing it myself.
<jgarnett> a mistake I won't repeat with the headers
<jgarnett> So that may be it for the meeting ...
<jgarnett> shall I post the logs; or just open up the floor for a general chat?
<desruisseaux> I can not stay toonight on my side...
<sfarber> jgarnett: you've got gabriel and me here...wanna talk arcsde?
<desruisseaux> Bye all
<jgarnett> well thanks for staying thus far.
<jgarnett> sure
<aaime> bye
<groldan> sure
<jgarnett> gabriel what are your plans for arcsde?
=-= jgarnett has changed the topic to "0) what is up 1) SoC 2) One week till svn cut off 3) arcsde chat"
<groldan> right now it should be editing versioned tables
<groldan> next step is
<groldan> to improve the threading handling a big deal
<sfarber> from my end the current list is:
<groldan> since the current approach is error prone and hard to maintain
<sfarber> * support lots more arcsde raster types
<sfarber> (float, double, colormapped 1-byte, non-colormapped 1-byte, etc)
<groldan> the general idea is to control the threads sde operations are run at, in order to be able to execute more granular units of work concurrently
<groldan> instead of locking a connection since a FC iterator is open until its closed
<groldan> with that, we're joy
<sfarber> * re-do the raster test suite in junit 4 style and taking a cue from gabriel so that the test load their own data.
<sfarber> that's all from me: groldan:
<jgarnett> I have not looked at JUnit 4 yet; it seems Andrea was all gung-ho to switch.
<groldan> what do you gain redoing in junit4?
<jgarnett> the work I was trying to do gabriel may be related to what you are doing. gabriel.
<jgarnett> I was trying to put arcsde datastore on a diet
<sfarber> with the connection locking are you proposing to override the SeStreamOp locking methods?
<jgarnett> and make it still work when only one thread is available.
<aaime> jgarnett, switch to junit4 completed
<groldan> jgarnett: indeed, what you're trying to achiece may only be possible with something like that
<aaime> you can use it in your modules
<aaime> (just needed a change of version in 2 dependencies)
<jgarnett> so I was capturing this as a subclass of arcsdedatastore; and then look at all the little methods and make the read-only ones reuse the existing connection
<jgarnett> aaime is the junit4 something you want to just do for the whole library? Or is it more serious than that.
<groldan> sfarber: looking for sestreamop lock methods...
<aaime> we already have it for the whole library
<aaime> junit4 contains junit3 classes as well
<sfarber> ok, so the thread synchronization is going to be something that's enforced at the gt2-arcsde code level?
<aaime> you can use both at the same time
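To illustrate "use both at the same time": once the dependency points at a JUnit 4 artifact, annotated tests and old TestCase subclasses can coexist in the same module. A generic example (not GeoTools code):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // JUnit 4 style: no TestCase superclass, tests are found by annotation.
    public class NewStyleTest {
        @Test
        public void simpleAssertion() {
            assertEquals(4, 2 + 2);
        }
    }

    // JUnit 3 style keeps compiling untouched, since the JUnit 4 jar still ships junit.framework.
    class OldStyleTest extends junit.framework.TestCase {
        public void testSimpleAssertion() {
            assertEquals(4, 2 + 2);
        }
    }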
<groldan> what locking methods are those?
<sfarber> groldan: one sec...let me get a little bit out and I think it'll be clear
<jgarnett> (gabriel I agree; I think I should wait until after your work is done? Or at least review your design rather than guess. For a moment I was thinking you were doing something like the WFSDataStore transaction state diff where it keeps a list of "commands" and then applies them on the result as it is streamed...
<jgarnett> Do you have any design stuff we can read? Or just read the code ...
<groldan> jgarnett: yes, it'd be of help to get you aboard to get to use a single connection, once that's possible
<sfarber> so, groldan, you're proposing that everyone writing code at the gt2-arcsde datastore code level use a set of methods internal to the the datastore code to coordinate and use connections correctly?
<groldan> nope
<groldan> that's all arcsde plugin business
<groldan> the thing is as follows (in my understanding):
<groldan> -sde connections are not thread safe, yet a single thread can run various operations over it, which's what jgarnet needs
<groldan> -we're handling transactions with sde native transactions, ie, held in the connection
<groldan> so any transactional operation plus queries over a FeatureStore need to use the same connection
<groldan> and often they're called concurrently
<groldan> so the plan is to run all the operations over a connection in a single thread, which will allow
<groldan> for a more granular locking
<groldan> and hopefully the thing is gonna be easy with the java.util.concurrent.Executors framwork
<jgarnett> thinking
<jgarnett> single thread per Transaction you mean? A little command queue for each thread ...
<groldan> yup
<jgarnett> okay I am starting to get it
<groldan> that's the only way I can see to avoid the performance overhead and maintenance nightmare right now
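A bare-bones sketch of the approach being outlined — each pooled connection owns a single-threaded executor, and all SDE work is submitted as commands that run, in order, on that one thread. The class names and the fake connection are illustrative only, not the real plugin code:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Stand-in for the real SeConnection, just to keep the sketch self-contained.
    class FakeSeConnection { /* ... */ }

    // A unit of SDE work; everything that touches the connection is phrased as one of these.
    interface Command<T> {
        T execute(FakeSeConnection connection) throws Exception;
    }

    class PooledConnection {
        private final FakeSeConnection connection = new FakeSeConnection();
        // One thread per connection: commands from any caller thread are serialized here,
        // so the non-thread-safe connection only ever sees a single thread.
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        public <T> T issue(final Command<T> command) throws Exception {
            return worker.submit(new Callable<T>() {
                public T call() throws Exception {
                    return command.execute(connection);
                }
            }).get();   // block the caller until the command has run on the connection thread
        }

        public void close() {
            worker.shutdown();
        }
    }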
<jgarnett> so I wasted a couple of days last week
<jgarnett> But now at least I can try and plan to take part later on ... provided you can tell me when that is?
<sfarber> groldan:
<groldan> I'm waiting for cholmes' go ahead, once I get it it'll take three work days
<groldan> sfarber?
<jgarnett> gabriel / saul should we have asked for a proposal / design doc for this?
<groldan> hmm... I guess not?
<jgarnett> okay; if it will help I can be available for the QA side of things at the end of your 3 days.
<groldan> sounds good, sure
<sfarber> ok groldan, so how will this affect someone calling "ArcSDEPooledConnection.getConnection()" a lot
<groldan> gonna keep communication open on the mailing list
<groldan> you wondering how will it affect the gce side of the fence?
<groldan> which reminds me, Saul, we need to figure out what to do about DataStore.dispose()
<jgarnett> if I can guess; it would become ArcSDEPooledConnection.executeWithConnection( new ConnectionRunnable(){
<jgarnett> void run( Connection ){
<jgarnett> ...
<jgarnett> }
<jgarnett> } );
<jgarnett> or something like that ...
<groldan> something like that
<groldan> ideally ArcSDEPooledConnection should stop extending SeConnection
<sfarber> Err, can you be really specific? I'm not really so worried about the gce stuff (I mean, que sera, sera) but I'm just curious about what the nature of getting/executing connections will be and how it will change.
<groldan> and provide a ArcSDEPooledConnection.execute(ArcSDECommand);
<groldan> interface ArcSDECommand { void execute(SeConnection conn); }
<groldan> so to speak
<sfarber> ahh. Ok, just like HibernateTemplate in spring?
<groldan> extending seconnection is something that may be worth to stop doing, as it'd simplify unit testing
<groldan> I mean, unit testing
<groldan> dunno about HibernateTemplate in Spring, Saul
<sfarber> So what, specifically, does this solve? How does it solve the "featureiterator holds open connection" problem?
<groldan> but as long as it starts to make sense for you, I'm glad
<sfarber> yeah, it makes sense.
<groldan> it solves the featureiterator problem
<groldan> but phrase it like: "featureiterator holds lock over connection" problem, instead
<groldan> think of the following:
<groldan> uDig is our best test bench for this
<groldan> you have 5 sde layers in uDig
<groldan> edit one
<groldan> udig sets the same Transaction for the 5 FeatureStores
<groldan> the ArcTransactionState grabs a connection and holds it, since the connection manages the native transaction
<groldan> udig calls one addFeatures and 5 (or 6) getFeatures, concurrently
<groldan> right now, the addFeatures operation is an atomic operation
<groldan> but querying the 5 layers and iterating over them are not
<groldan> each iteration consists of a number of atomic operations, say, every next()
<groldan> but right now we're forced to wait until one iterator is fully traversed until another one is open
<groldan> with this change, we can still serve the 5 iterators concurrently, or interleaved
<groldan> since we can call a fetch command on each next() invocation
<groldan> rather than grabbing the lock when opening the iterator and releasing it when closing it
<jgarnett> understood; that is why I wanted this to settle down before I try the "single connection mode" - since may of the "get metadata" requests I would like to handle as atomic operations.
<groldan> makes sense?
<sfarber> groldan: I'm still a bit confused. In the end you open a SeQuery against an SeConnection object, and until you've closed that SeOuery you can't use that SeConnection for anything else, right?
<groldan> not right
<jgarnett> so if you can leave some kind of .exectureReadonly( ... ) method in there it will help; since I can use a connection regardless of if a transaction has been started or not.
<sfarber> ahh, ok.
<groldan> the problem is you have to do that in the same thread
<groldan> so the queue per thread commands
<jgarnett> guys ping me when done and I will post the longs
<sfarber> So you're saying that in the same thread (but ONLY in the same thread) you can open a number of SeQuery objects against ONE SeConnection object and read from each query out of order and it will work?
<groldan> if get two concurrent getFeature requests and try to execute the SeQuery from the calling threads, you'll get a SeException
<groldan> I mean, while an SeStreamOp is open by one thread, no other thread can open an SeStreamOp
<groldan> BUT a single thread can have up till 24 SeStreamOp open at same time
<sfarber> So the lock policies listed at
<groldan> (24 is a server side configurable thing afaik)
<sfarber> aren't really relevant? We can't just set SE_UNPROTECTED_POLICY and let ArcSDE trust us that we won't do anything too screwey?
<sfarber> yup, I've definitely run across the streams/connection setting before.
<groldan> about the policies
<groldan> I guesss they don't solve our problem
<groldan> trylock fails if another thread is using the connection
<groldan> lock choses a random waiting one
<groldan> we need a predictable queue
<groldan> but I might be convinced otherwise
<groldan> yet, one way or the other, the change applies in that it isolates this problem in a single point, rather than being a concern to deal with every time you use a connection?
<sfarber> sure, ok that makes sense.
<sfarber> So, in the end, the change is something like this:
<sfarber> OLD WAY:
<sfarber> ArcSDEPooledConnection con = pool.getConnection();
<sfarber> //do stuff with con
<sfarber> con.close();
<sfarber> wait...
<sfarber> I missed an important parT!
<sfarber> OLD WAY:
<sfarber> ArcSDEPooledConnection con = pool.getConnection();
<sfarber> con.lock();
<sfarber> SeQuery q = new SeQuery(con);
<sfarber> q.execute();
<sfarber> con.getLock().release();
<sfarber> con.lock();
<sfarber> q.close();
<sfarber> con.unlock();
<sfarber> con.close();
<sfarber> and now the NEW WAY:
<sfarber> // doing something in a method...
<sfarber> ArcSDEPooledConnection con = pool.getConnection();
<sfarber> con.execute( new ArcSDECommand() {
<sfarber> public void execute(SeConnection con) {
<groldan> conn = pool.getConnection();
<sfarber> SeQuery q = new SeQuery(con);
<sfarber> q.execute();
<sfarber> //blah blah
<groldan> more or less
<sfarber> q.close();
<sfarber> }
<groldan> I'd better say
<sfarber> });
<jgarnett> ie this is what they mean when they say "closure" for Java; the same reason I keep trying to use FeatureVisitor
<jgarnett> a couple more things gabriel
<jgarnett> use an abstract super class for your ArcSDECommand
<jgarnett> and provide methods to create SeQuery etc...
<groldan> ah, sorry, you talking about SeQuery, me thinking of ArcSDEQuery
<groldan> yes, that's right Saul
<jgarnett> that way you can clean them up after.
<jgarnett> less client code == better
<groldan> yup
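Roughly what the abstract super class suggestion amounts to — the base command hands out queries and guarantees they are closed afterwards, so individual commands stay small. The Query/Connection stand-ins below are simplifications, not the ArcSDE API:

    import java.util.ArrayList;
    import java.util.List;

    // Simplified stand-ins so the sketch is self-contained.
    class Query { void close() { /* release the server side stream */ } }
    class Connection { Query newQuery() { return new Query(); } }

    // The base class keeps the bookkeeping out of every individual command.
    abstract class AbstractCommand<T> {
        private final List<Query> openQueries = new ArrayList<Query>();

        // Subclasses ask the base class for queries instead of creating them directly...
        protected Query createQuery(Connection connection) {
            Query q = connection.newQuery();
            openQueries.add(q);
            return q;
        }

        // ...so the framework can guarantee cleanup even if a command forgets.
        public final T run(Connection connection) throws Exception {
            try {
                return execute(connection);
            } finally {
                for (Query q : openQueries) {
                    q.close();
                }
                openQueries.clear();
            }
        }

        protected abstract T execute(Connection connection) throws Exception;
    }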
<sfarber> so the implementation of con.execute() is that it can "store up" these ArcSDECommand objects and run them out of order, or synchronize them in a one-thread-per-actual-connection implementation?
<groldan> if I were able to provide a full facade to arcsde that's be ideal, but wont have that much time I guess
<groldan> sfarber: not planning to run them out of order
<groldan> but in the invocation order, as seen by each calling thread
<jgarnett> gabriel you are still going to have "fun" when having several queues going on different "Transactions" right?
<sfarber> ok, makes sense. This is mostly purely to solve the problem of connection locking and getting all the commands executed in one thread, rather than every bit of "direct-to-sde" client code running at the SeConnection synchronization stuff willy-nilly.
<groldan> different transactions means different connections
<jgarnett> are you thinking of a single queue? ie to preserve order across transactions (ie why bother) or multiple queues one per transaction.
<groldan> since the transaction state holds the connection
<groldan> one per transaction, the connection handles the native transaction, a geotools transaction is tied to a sde transaction throught the transaction state
<groldan> GTTransaction-->ArcTransactionState->ArcSdePooledConnection-->command queue
<jgarnett> understood
<jgarnett> so I get where you are going gabriel
<jgarnett> saul how are you doing?
<sfarber> eh, I'm still a bit fuzzy on how ArcTransactionState relates to arcsde native transactions, but I'm not too worried about it. I'm happy about the command-based callback system.
<jgarnett> gabriel I am thinking hard about the single case now
<jgarnett> basically set up a datastore with a single command queue
<jgarnett> and have two kinds of commands; one for read only and one for read-write
<sfarber> However, it's going to take a lot of re-coding! It'll take me a while to get everything back and working after the change.
<jgarnett> and use that to figure out when to schedule them
<jgarnett> it may just be a matter of building up the queues; and then letting them "go" as they have a connection made available.
<groldan> you mean for the maxConnections=1 case?
<jgarnett> yes I do
<groldan> well we have a timeout parameter, how long to wait for an available connection
<jgarnett> It is almost like these things are coming in with Transaction.ANY
<jgarnett> I want them schedule on whomever is "next"; possibly budging in line since they do not represent modifications.
<groldan> the thing would be, to make transactions as short lived as possible
<groldan> hmmm I feel like in a trap here...
<sfarber> hey guys, I gotta bail. I'm glad we talked this through. It would have been a shock otherwise!
<sfarber> catch you all later.
<groldan> ok
<groldan> you want udig running with a single connection
<groldan> but I wonder how does that play with the way udig handles transactions
<groldan> lets see.. (just thinking)
<groldan> if all the sde layers have the same transaction set (which is what happens right now I guess?)
<groldan> we should be gold
<groldan> then, say there's no transaction in course... and you hit refresh
<groldan> the sde will get N concurrent getFeature requests, with Transaction.AUTO_COMMIT
<groldan> that means they're free to ask for their own connections
<groldan> so
<jgarnett> hi gabriel; sorry was distracted building 2.2.x - I don't mean you to be trapped; and this goal may not be possible.
<jgarnett> in udig all the arcsde layers for a map use the same transaction.
<jgarnett> indeed all the arcsde layers ever should use Transaction.AUTO_COMMIT until they have need to modify data.
<groldan> no, I don't say trapped in a bad sense
<groldan> it should be possible indeed
<groldan> I'm just trying to figure out the exact scenario
<groldan> so, in in autocommit mode and the pool has a single connection
<groldan> you're relying on the pool.timeOut parameter being high enough to allow serializing all the getFeature requests
<groldan> makes sense?
<jgarnett> correct; single connection until you start editing
<jgarnett> after that single connection for the Map
<jgarnett> (but I still would like all the "book keeping" commands to go out on that connection .... even though right now they are AUTO_COMMIT
<jgarnett> they really should be DO_NOT_CARE)
<jgarnett> and then if the user opens a second map
<jgarnett> that is when they are stuck; and run out of connections.
<jgarnett> for my customer it is more important that the number of connections be low - and 1 if possible.
<jgarnett> right now I think I am at 1 for AUTO_COMMIT, and 1 per map
<groldan> mean you're being able to do that?
<groldan> and your customer should consider using WFS to keep the connection count under control for multiple users
<groldan> (kidding, I know we already talked about that)
<jgarnett> heh
<jgarnett> well make geoserver faster!
<jgarnett> okay let me wrap up the logs; thanks for the chat.
<groldan> thanks you
<jgarnett> (did you understand what I was thinking you were up to? keep the queue of commands in memory and use it to postprocess the feature reader)
<jgarnett> and then execute the entire set of modifications in one go
<jgarnett> like wfsdatastore does.
<groldan> I understand, but I'm not going to have the energy to do that I guess
<jgarnett> fair enough!
<jgarnett> enjoy the rest of your holiday
<groldan> ditto
<groldan> (though something tells me that could be a sort of strategy object since the execution is gonna be isolated to a single point?)
<jgarnett> better yet; subclass of DataStore
<jgarnett> DataStoreFactory makes the decision
<jgarnett> (I already started that side of it; but I will now delay further work until I hear from you)
<jgarnett> consider it a BIG strategy object.
<groldan> and sets one or other executor, using composition instead of inheritance
<jgarnett> either works ... I was just trying to poke fun at using strategy everywhere; one of the main feedbacks on uDig is that code should "make up its mind" rather than delegate out responsibility all the time.
<groldan> ok, I'll try to keep your goal in mind for the design, though won't be able to enforce your use case
<jgarnett> make it easier to follow / debug.
<jgarnett> good good
Agenda:
- what is up
- svn cut off
- SoC
jdeolive: is it meeting time?
aaime n=aaime@87.11.55.223 entered the room.
simboss: I thinjk so
simboss: jdeolive
jdeolive: hi simboss
simboss: ciao justin
jdeolive: cool, ping aaime , jgarnett , dwinslow , desruisseaux , vheurteaux
aaime: Hi
dwinslow: hi
jdeolive: anyone have any agenda items?
vheurteaux: hi
desruisseaux: No agenda item on my side
jdeolive: jgarnett?
***jdeolive thinks this might turn out to be a short meeting
desruisseaux: I have topic for some future IRC (module renaming, svn cleanup...), but it is for an undetermined future since I'm not yet free for those tasks...
jgarnett: hello
jgarnett: back now
jdeolive: hi jody
jdeolive: do you have any agenda items?
jgarnett: thinking
jgarnett: Eclesia cannot make it today
jgarnett: was asking about the process proposal
jgarnett: I think he has enough to start on; some WPS work may revise the interfaces he defines later
jgarnett: but that can be later.
jgarnett: So if he actually puts it to the vote (say via email) then I am pretty happy with it.
jgarnett: ArcSDE build hell and otherwise?
jgarnett: is garbriel and/or saul around for an update?
jgarnett: apparently not ... so I agree this will be a short meeting.
jgarnett: So lets just do the 0) what is up ... 1) svn cut off
jgarnett has changed the topic to: 0) what is up 1) svn cutoff warning
jgarnett: 0) what is up
desruisseaux: Only topic I could bring is that Andrea reported a failure in Resampler2D. The fault is a change that I did recently (hunting for an IndexOutOfBoundsException in MlibWarpPolynomialOpImage.computeTile) and I will try to bring Resampler2D back to its previous state (or close to it) tonight.
acuster n=chatzill@pro75-1-82-67-202-232.fbx.proxad.net entered the room.
jgarnett: cool
jgarnett: jgarnett - arcsde single connection mode experiment; will end today one way or another.
jgarnett: anyone else doing anything ...
jgarnett: ... cool library must be done then.
jgarnett: 1) svn cut off
jgarnett: There is about two weeks before svn access will be turned off; so send your GeoTools contributor agreements in.
desruisseaux: Thanks for sending the emails Jody
jgarnett: (of if you have some kind exception situtation talk to a PMC member)
cliff n=chatzill@216.16.230.194 entered the room.
acuster: jgarnett: what does it mean that a few people cannot legally sign the doc?
***acuster thought we had structured the doc for those people as well
cliff left the room.
jgarnett: see the wiki page
jgarnett: at least one of the
jgarnett: we are welcome to do anything with those contributions etc...
acuster: yeah, but they should read the doc
acuster: it should work for them as well
jgarnett: okay we should ask them to do that; let me find the indivudual
jgarnett: Bryce Nordgen; and his employer (US Forest Service) got back to me saying they cannot legally sign the document.
jgarnett: (ie a red x )
jgarnett: Do you have Mr. Nordgen's contact information?
acuster: yeah, when I get back to work I'll contact him
jgarnett: um...
acuster: he seems to be the only one\
jgarnett has changed the topic to: 0) what is up 1) svn cutoff warning 2) SoC
jgarnett: so when the end of the month comes; we will go over the svn access list
jgarnett: and comment out peoples names?
jgarnett: 2) SoC
jgarnett: simboss do you have some words for us about our hopeful SoC experience?
simboss: well besides creating the page
simboss: I have not done much so far
simboss: too busy
simboss:
jgarnett: I updated the timeline
jgarnett: Currently the "Proposed Ideas
simboss: thx about that
jgarnett: is all about last years projects....
jgarnett: ... so we need help; as far as I know students are going to be looking at this today.
jgarnett: (ie today was the deadline)
jgarnett:
simboss: what do you mean by
jgarnett: oh wait we get to find out now if osgeo was accepted; and start talking to students today....
simboss: proposed ideas are all about last year's project?
jgarnett: correct; those are all links to the projects we accepted last year are they not?
simboss: nope
jgarnett: (also on the #osgeo channel; osgeo is accepted)
simboss: those are ideas I just dropped there
simboss: (yeah I kjnow .-) )
jgarnett: doh; you are right ... I am stupid; I was confused because mentors were listed
jgarnett: (we don't do the mentor thing for a while; students come up with the ideas after all...)
jgarnett: those are some interesting ideas!
simboss: well, feel free to update the page
simboss: they would be
simboss: the problem is finding people
simboss: it is going to be hard to find someone in western europe
simboss: for the money they give
simboss:
simboss: euro is too strong
jgarnett: lol
jgarnett: try for eastern europe then...
jgarnett: (simboss is complaining about being rich? must be nice...)
simboss: I am not rich
simboss: (not yet
jgarnett: (I am teasing....)
jgarnett: so everyone has a google id; they are going to add us to the list for osgeo
jgarnett: and then we can start reviewing student applications.
simboss: anyway, you are right we should really target eastern europe
jgarnett: Last time was harder; because the student submissions were in the google site (and not around for us to talk about w/ the community)
jgarnett: does that matter?
simboss: not so sure about it
jgarnett: Thanks for organizing last week simboss; it was needed!
jgarnett: That is probably it for the meeting?
simboss: I guess so
acuster: ciao all
vheurteaux: ciao acuster
jgarnett: I will post the logs
Agenda Items:
- What is up
- jaxb proposal
- default values
- svn cut off
Action items:
- waiting on votes from jdeolive and ianT
- Svn access will be cut off at the end of march for anyone that has not signed a GeoTools Contribution Agreement
desruisseaux: Agenda topic: JAXB annotations on metadata - vote?
jgarnett has changed the topic to: 0) what is up 1) jaxb proposal .... x) svn cut off
Daniele n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room.
Daniele left the room (quit: Client Quit).
simone n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room.
aaime n=aaime@host212-40-dynamic.1-87-r.retail.telecomitalia.it entered the room.
desruisseaux: We have 2 agenda topic. Anyone else has other ones?
aaime: default values and validation?
Eclesia n=Administ@ACaen-157-1-114-235.w90-17.abo.wanadoo.fr entered the room.
jgarnett has changed the topic to: 0) what is up 1) jaxb proposal 2) default values .... x) svn cut off
desruisseaux: Are we ready to begin?
jgarnett: yes; Martin I am fielding some udig questions can you run the meeting today please?
desruisseaux: Will try (I'm not as good as you - don't even know how to change the topic!)
desruisseaux: So 0) what's up
desruisseaux: On my side: martin) Looks like that I finally got a first version of ImageMosaicReader working. Performances seem good.
desruisseaux: (tried on Nasa's BlueMarble)
aaime: Nice. Is that working against postgrid, seagis?
aaime: I mean, is that something we can try out somehow?
desruisseaux: It is used for postgrid, but it is independant of it.
desruisseaux: (typo: used by)
Eclesia: Johann sorel : i found how to code widget so that they can be inserted in netbeans gui editor, most of the widget are now ready for that
desruisseaux: Yes, it is just a javax.imageio.ImageReader implementation.
groldan n=groldan@217.130.79.209 entered the room.
jgarnett: jgarnett - udig version hell
***aaime fighting functional testing against live data directories in geosever
simone: how do you store the tile index martin?
jgarnett: jgarnett - should be positive, intergrating some greate german translations for udig
simone: simone: doing non-geotools work
desruisseaux: Not ImageMosaicReader's business. It is the user's responsibility to create a set of Tile objects. In my case I do that on the Postgrid side. You can do that using a Shapefile if you wish.
desruisseaux: A Tile object provides the informations needed by MosaicImageReader.
***groldan doing 80-20 ArcSDE work (80% the time trying to get an instance to connect to, 20% working)
jgarnett: heh; you guys should grab an agenda item!
simone: jgarnett: weren't you answering udig questions
You are now known as repressed
simone:
desruisseaux: Can we move to agenda topic 1?
desruisseaux: I assume that the answer is yes...
desruisseaux: Proposal:
desruisseaux: Reminder: no new dependencies
desruisseaux: Only drawback I could see: would increase the size of metadata JAR.
jgarnett: only part missing is who does the tasks.
desruisseaux: Mostly Cédric
You are now known as jgarnett
desruisseaux: He have done almost everything
jgarnett: Specifically I am interested in who does the documentation tasks; something that has been a sticking point on the last several proposals.
desruisseaux: Vincent (vheurteaux), can we give this task to Cédric too?
desruisseaux: I assume that we just need indication about how to parse and format XML from a Metadata object?
vheurteaux: yep!
jgarnett: correct; you have the tasks already on the page - I just wanted to make sure a body was associated with the work.
desruisseaux: (actually I believe that Cédric already started some documentation draft)
desruisseaux: Well - Cédric everywhere.
jgarnett: (and it is not like they need to be long; just enough of a code example that users can start asking real questions on the mailing list)
jgarnett: what is his confluence id?
desruisseaux: (looking...)
desruisseaux: Seems to be cedricbr
vheurteaux: cedricbr
desruisseaux: So can we call for a vote?
jgarnett: okay with that accounted for I can vote +1
desruisseaux: Thanks
simone: +ò
simone: ops
desruisseaux: +1 on my side too of course
simone: +0
jgarnett: we have not managed to get a vote out of IanT for a while; perhaps via email.
jgarnett: aaime ping?
jgarnett: jdeolive ping?
aaime: Sorry
simone: I have a question though
desruisseaux: Yes?
simone: how this work compare to using hibernate or something like it
desruisseaux: Similar idea
simone: xmlbeans
simone: I mean, does it preclude usage of an alternate framework?
jgarnett: no it does not
simone: most people don't use JAXB
simone: at least afaik
desruisseaux: I can't be sure that I'm understanding right because JAXB is a new technology for me and I don't master it. But from what I have understood, I have the feeling that JAXB is like JDBC: a set of standard interfaces (actually standard annotations) allowing different vendors to plug in their own implementations.
pramsey n=pramsey@S01060014515fec41.gv.shawcable.net entered the room.
desruisseaux: Java 6 is bundled with an implementation, but if I'm understanding right we are not forced to use that implementation.
simone: jgarnett: say we would want to use hibernate, we would have to use xml descriptors
jgarnett: simone my experience is mixed; a lot of people use jaxb on the "intranet" side of things; especially for SOAP/WSDL side of things. They just treat it as part of java and hope the xml stuff never has to be looked at.
simone: we could not use annotations, right?
jgarnett: simone you could use annotations; the annotations do not collide or anything (they are only metadata)
simone: k
simone: just curios...
aaime: anyways +1 for me
desruisseaux: Thanks
groldan: is there a iso19139 document made out of a Metadata object somewhere to have a look at it?
groldan: out of a MetadataImpl I mean
desruisseaux: Cédric have some. Do you want me to ask him to post it on the mailing list before vote?
jgarnett: I am hoping to see that as part of a test case / code example.
groldan: an attachment to the proposal may be?
groldan: and no, I'm not saying I want to see to beleave (before voting)
desruisseaux: No problem
desruisseaux: I would have considered that as something perfectly normal and legitimate anyway.
desruisseaux: I will ask him tomorrow to post his examples as attachment to the proposal page.
groldan: not sure, may be like requiring the job to be complete beforehand
groldan: I'm just curious to see the product of it
groldan: and if the jaxb tech plays well with namespaces and prefixes and the like
desruisseaux: Actually the job is already mostly completed - we wanted to make sure that it was doable before to make this proposal.
groldan: yeah that's smart too
desruisseaux: I know that he have namespace - I can't said if they are all right since I'm not a XML specialist, but to a novice like me they looks like honest namespaces.
groldan: question: what do you do regarding InternationalString and Text elements?
desruisseaux: Cédric is working on it right now
(I means today - he will continue tomorrow)
groldan: I mean, is there a way to encode FreeText elements in more than one locale?
desruisseaux: Yes
desruisseaux: Since today
groldan: wow, cool
groldan: +1 vote here, community support
vheurteaux:
desruisseaux: He showed me a working example. He is now tracking a bug in unmarshalling of FreeText with more than one locale.
groldan: you never had the feeling InternationalString needed a getLocales():Set<Locale> method?
desruisseaux: Yes
jgarnett: martin did you do the proper ISO thing for InternationalString? As I recall there was a general solution that could be applied to GetCapabilities documents and the like. Declare the default language in the header; and use some kind of tag in the middle of the free text sections for each language.
desruisseaux: The problem is that Set<Local> is hard to implements on top of java.util.ResourceBundle.
groldan: yup
groldan: that's why I had to make my own InternationalString implementation a while ago, working with Hibernate
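A tiny illustration of the kind of InternationalString being described — one backed by a Map so that a getLocales() method is trivial and a marshaller can emit every translation. A sketch only, not the GeoTools implementation:

    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;
    import java.util.Set;

    // Holds one free-text value per locale, so multi-language FreeText elements can be produced.
    class MultiLocaleString {
        private final Map<Locale, String> translations = new HashMap<Locale, String>();

        public void add(Locale locale, String text) {
            translations.put(locale, text);
        }

        public String toString(Locale locale) {
            String text = translations.get(locale);
            return text != null ? text : translations.get(Locale.ENGLISH);   // crude fallback
        }

        // The method the discussion is asking for: which locales do we actually have?
        public Set<Locale> getLocales() {
            return translations.keySet();
        }
    }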
simboss_away n=chatzill@host23-197-dynamic.37-79-r.retail.telecomitalia.it entered the room.
desruisseaux: Jody - I'm not familiar with that. But we will look at it - peoples here are pretty sensitive to localization, so I guess that this topic will get attention.
groldan: sorry for the disruption, continue with the meeting
jgarnett: we can talk about it after the meeting
desruisseaux: On a related topic, Cédric will need SVN write access in order to commit his work.
desruisseaux: He would do that very progressively, beginning with only small bits in order to leave time for people to review if they wish (rather than big bunches of commits)
jgarnett: martin you will need to nominate him like normal; and review his work.
jgarnett: (as usual this is mostly a test to see if the developers guide has been read)
desruisseaux: All right
desruisseaux: Thanks for the vote on the metadata proposal. I'm done.
jgarnett: 2) default values
jgarnett: aaime you have the fllor
jgarnett: floor.
aaime: Ah, this is just to summarize my mails about default values and validation of last week
aaime: since I got no answers to the last one
aaime: To sum up, forcing default values into non nullable fields is a behaviour we have in 2.4.x too
jdeolive left the room (quit: Read error: 110 (Connection timed out)).
aaime: and removing it would break some modules
aaime: in particular MIF
aaime: yet validation can be removed easily and will cause no damage
aaime: (besides one test that needs fixing)
aaime: I'm curious of one thing though
aaime: all the information needed for validation is stored into Feature and Property
aaime: so why do we use an external utility class to make validation?
aaime: Wouldn't it make sense to have an isValid() method in both Property and Feature?
simboss_away is now known as simboss
simone left the room (quit: "ChatZilla 0.9.81 Firefox 2.0.0.12/2008020121").
aaime: hmmm... any reaction?
groldan: thinking...
groldan: I guess it would make sense, and also would make sense isValid delegates to the helper class
jgarnett: thinking ...
aaime: sleeping...
groldan: like to alleviate the task for different implementations..
jgarnett: I would like to remove validation; unless the user asks for it. The default value is available; so if a DataStore wants to make use of it when null is not an option then that is fine. It should probably issue a warning when substituting in the default value?
jgarnett: You could add an isValid() method; we have done something similar in the past.
jgarnett: you have a trade off between making methods on the interfaces
jgarnett: (and having to write down the contract for them in javadocs so everyone gets it right)
jgarnett: or making methods as static utility functions; that just depend on the interfaces
jgarnett: so there is no chance of implementors screwing it up.
jgarnett: For this first cut
jgarnett: (ie 2.5)
jgarnett: I would like to keep the interfaces as small as possible
aaime: ah, now I get it, thanks for explaining
jgarnett: after we have some experience on the ground we can consider dragging some of the more popular methods into the mix.
jgarnett: (and when we do the javadocs will say what static utility function is called - ie a very strict definition; but still allowing for optimization - the one reason to place methods on a class api after all)
jgarnett: jgarnett: jdeolive mode off
jgarnett: aaime was that the discussion you needed? if so we really should move on ...
aaime: more or less
aaime: I mean, no one spoke their minds about default values
aaime: but I'm not forcing anyone to do so
aaime: we can go on
jgarnett: hrm; my mind was already spoke
jgarnett: 3) svn cut off
jgarnett: how does the end of the month sounds?
aaime: sounds good, what about snail mail issues?
jgarnett: ie everyone who has not sent in their paper work is shut out
aaime: you cut people that did not give you confirmation by mail right?
jgarnett: we can take peoples word that they have sent in the mail
jgarnett: at least for a few weeks...
aaime: (i.e., there is no guarantee that my mail will get there in time, I waited some packages from Amazon for over 2 months)
jgarnett: we don't need to be mean.
jgarnett: we just need to keep moving.
jgarnett: -
desruisseaux: I'm fine with end of March cutoff.
jgarnett: the list is going okay; we have 2 rejections on hand, and david adler is talking to IBM
aaime: 2 rejections?
jgarnett: both rejections allow us access via LGPL so it is not a stress.
aaime: should we be worried?
jgarnett: Bryce always had to reject; his work is in the public domain.
desruisseaux: Which part of the code is affected by the rejections?
jgarnett: David Zwiers has always been clear about not signing (c) away.
jgarnett: we can do an audit and see
jgarnett: audit tools:
jgarnett: -
jgarnett: -
jgarnett: we also got a few "huh?"
jgarnett: messages from the likes of FrankW who cannot remember what they contributed.
aaime: if it's just an ant build file
aaime: it's not there anymore anyways
jgarnett: audit of david zwiers:
aaime: ok, can you explain me that lgpl thing?
aaime: since he's done tons of commits
jgarnett: lets use bryce as an example
desruisseaux: The license stays LGPL, but the copyright is not assigned to OSGEO for the code written by someone who rejected the assignment.
jgarnett: bryce releases work in the public domain; we can make use of that in our project and do anything with it - including making it available under the LGPL.
aaime: jgarnett, that case is clear
aaime: but I'm not sure about David's one
groldan: btw, anyone you know on the most active java developers list:
jgarnett: for david zwiers the work he did while at refractions is covered by the refractions signing the OSGeo document
jgarnett: it looks like he has done 6 commits since then.
groldan: go wolf go!
aaime: groldan, no, nobody
aaime: jgarnett, ah, good
jgarnett: yawn; I am pretty tired of this osgeo grind; suppose it had to be done regardless - and the problems were ours beforehand.
jgarnett: so turning off the svn tap at the end of the month is good
groldan: yup
aaime: sure
jgarnett: after that we can sit down and update the headers / aka providence review part 2
jgarnett: okay thanks for the meeting everyone - and happy hacking.
desruisseaux: Thanks for driving the meeting Jody.
jgarnett: heh; thanks for getting us out of the driveway.
groldan: and share some kudos
jgarnett: (doh; I am full of type mumbles today)
jgarnett: I will post the logs.
Summary:
0) what is up
1) magic wfs xml dependency
2) jira use
3) Some considerations on the process proposal
jgarnett: 0) what is up
jgarnett: jody - porting fixes to trunk, doing too much, etc...
jsorel: johann sorel : cleaning up map2d widget, uml can be found :
***aaime looking around for 1.6.2 bug fixes, starting to consider extended styling for markers to get on par with mapserver, etc
aaime: (btw, looking here: cannot find the process proposal?)
aaime: any direct link to it?
jgarnett: aaime I had a good conversation with a uDig user about extenting markers; we have an interface / designed worked out if you want.
jsorel:
aaime: I'd be interested in looking at if for sure
jgarnett: I will join Eclesia in marking it down as "RnD"; I am hoping to be paid for the work since it would be for MIL2525B symbols.
aaime: jgarnett, I cannot pay you for that
jgarnett: You do know that we allready extend the formal style interfaces; and you can follow suite if you want to do more; ie add extra fields to the style implementation in GeoTools to control additional parameters.
aaime: If money is involved I'll just go on by myself, this is not something endorsed by TOPP
jgarnett: no I was hoping to get paid; but if you do the work first that is fine ... I have lots to do.
jgarnett: I just want the work done.
aaime: If you have a design to point me at nice, otherwise I'll try to cook up one by myself
aaime: but yeah, the idea was just to extend on the well known markers and make the name something that can be used to look into a pluggable symbol system
jgarnett: yeah I have a design; let me cut and paste for emails.
aaime: cool
jgarnett: yeah that is the design
jgarnett: we sorted out the interface however.
jgarnett: so you have somewhere to start.
aaime: Ok, we'll talk about it in another meeting
jgarnett: heh; you run the meeting and I will cut and paste the content.
aaime: Ok
aaime: Anything else for the what's up part?
jgarnett: well gabriel is mostly what is up
jgarnett: by I understand he is between a workshop and his hotel.
jgarnett is now known as gabrie1
aaime: Eh yeah, he's at the OpenStreetMap mapping party in Girona
gabrie1: I am between my workshop and hotel; I have been working on making DataStore Java 5 happy; and am bogging down with DataStoreFactory.Param
gabrie1 is now known as jgarnett
aaime: sigh...
aaime: Ok, let's move to next topic
aaime: 1) magic wfs xml dependency
jgarnett: so yeah; Eclesia's Process proposal also used Param; so we just need to make it a first class object.
jgarnett: um that is me
aaime: this is for jgarnette and jdeolive
jgarnett: but reall jdeolive
aaime: that's not here
jgarnett: basically what the heck; it took me for ever to build on friday.
jgarnett: do we know the solution?
jgarnett: okay everyone is away ...
jgarnett: moving on ...
aaime: I'm here
jgarnett: true
aaime: the solution is to either
aaime: - make sure those jars are deployed (are they not?)
jgarnett: they are
aaime: - put them in gt2, since that's their place anyways
aaime: jgarnett, so why weren't you able to build?
sfarber left the room (quit: Read error: 110 (Connection timed out)).
jgarnett: but they are not fetched by mvn; you need to remove them from your local repo first.
jgarnett: even -U does not do it
aaime: Ah, because someone keeps on updating them
aaime: but does not change the version number
jgarnett: without changing version number
jgarnett: (yeah)
aaime: sigh
jgarnett: so if they are real; treat them as real.
aaime: gabrieeeel???
jgarnett: no idea?
aaime: No, the real solution is to move them into gt2
aaime: the so so solution is to remind people updating them
aaime: to redeploy them upping the version number
aaime: that's why I was calling Gabriel
jgarnett: understood.
aaime: he's the one hacking on them nowadays (afaik)
jgarnett: okay so I feel like we need a breakout IRC with gabriel ;-P
jgarnett: next?
aaime: sure
aaime: with gabriel and jdeolive,yeah
jgarnett: 2) jira use
jgarnett: basically Jira use concerning "Fixed for"
jgarnett: is not being done correctly.
jgarnett: so a single bug is showing up in the release notes of most releases.
aaime: I don't understand why all those issues were tagged for a million releases
jgarnett: this was really bad for the 2.4.1 release.
jgarnett: I think it is just a user error.
aaime: may be... (not convinced)
aaime: in geoserver we usually tag an issue for both the stable and the current trunk
aaime: but not more than that
aaime: tagging for two subsequent releases on the same branch makes no sense to me
jgarnett: I agree
jgarnett: I think we may be done ... can we do a search and catch these
jgarnett: and fix them in bulk
simboss: (hy guys, sorry mission impossible III got my attention
)
simboss: (hy guys, sorry mission impossible III got my attention
)
aaime: can you repeat that? I did not get it the first and second time
simboss: (hi guys, sorry mission impossible III got my attention
)
simboss:
aaime: Aaahh, thank you, now I got it
aaime: simboss, did you have feedback on the process proposal?
aaime:
aaime: it looks saner than the last time I looked at it
jgarnett: yep; I reviewed.
aaime: thought using generics in all those maps
aaime: would make it clearer
simboss: it's changed since last time I look at it
simboss: do we have
jgarnett: some stuff like the "Parameter" need to be done anyways; and we need to write it up using the "Proposal" template so we can record vote.
simboss: status for the process?
jgarnett: better
jgarnett: we have Eclesia here
simboss: like in WPS?
jgarnett: not sure exactly what you mean simboss?
simboss: I meant to say a way to ask a long running process
simboss: for its status?
jgarnett: a ProcessListener has some of that
simboss: you spawwn a process
jgarnett: sorry "ProgressListener"
aaime: ProgressListener maybe? too bad it's not included in the proposal
jgarnett: it is.
simboss: but then you want to be able to ask the status
jgarnett: public Object process(ProgressListener monitor);
aaime: the interface definition is not
aaime: (the listener and its event object)
simboss: I would like to se the progress listener interface define
aaime: ah, it's because it's in gt2 since 2.0
jgarnett: right; I think it is actually just a callback - not sure it has much of events.
simboss: defined
simboss: is there a way to explicitly stop a running process?
aaime: Hum, parallel proposal? yeah, the Prodgress
aaime: sigh sorry
jgarnett: note it is mostly concerned with reporting progress; and cancelling. It does have some support for exceptions
aaime:
aaime: ah no, it's in the docs...
aaime: can't remember when we talked about this api...
jgarnett: yeah docs!
jgarnett: I talked about it recently as I was adding it to JTS
jgarnett: (in simplified form)
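A small sketch of the contract being discussed here, a process reporting progress and honouring cancellation through the listener passed to process(); everything except the process(ProgressListener) signature quoted above is illustrative, and the GeoTools ProgressListener (started/progress/isCanceled/complete) is assumed:

// Sketch only: a process body honouring the ProgressListener contract.
// ExampleProcess and the work it does are made up for illustration.
class ExampleProcess implements Process {
    public Object process(ProgressListener monitor) {
        monitor.started();
        int steps = 10;                              // illustrative units of work
        for (int i = 0; i < steps; i++) {
            if (monitor.isCanceled()) {              // cancellation is part of the contract
                return null;
            }
            // ... do one chunk of the real work here ...
            monitor.progress(100f * (i + 1) / steps);
        }
        monitor.complete();
        return Boolean.TRUE;                         // a real process returns its result
    }
}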
jgarnett: in anycase the "Process Proposal" looks to be close; needs to be written up as a formal page for vote?
simboss: I would not mind having a way to uniquely identify a process
sfarber [n=sfarber@88-149-177-23.static.ngi.it] entered the room.
sfarber: Did I miss the meeting?
aaime: more or less
jgarnett: the organized part of it anyways
jgarnett has set the topic: 0) what is up 1) magic wfs xml dependency 2) jira use 3) free for all
aaime: jgarnett, yeah, an ID would be nice
aaime: to allow server like apps to have a way for clients to refer back to a long running process
sfarber: sorry!
simboss: exact aaime
simboss: I would like to have a fully fledged status object
simboss: that can be requested
simboss: providing an ID
simboss: for a running process
aaime: wondering if this can be achieved using a sub-interface specific for GeoServer
simboss: I do not want to listen for a process
aaime: since what simboss is referring to
simboss: I want to ask the status when I want it
simboss: of course
aaime: is a composite process (which may be made of a single element eventually)
simboss: I am just depicting a use case
simboss: (yeah, could be )
aaime: what I mean simboss
simboss: (aggregation actually, to be picky
)
aaime: is that you want an ID for the chain
aaime: but probably not for the single element in it
simboss: mmmhh
simboss: the way I see it
simboss: there is not really a difference
aaime: unless you use it to name an input or an output of the element
simboss: a process
simboss: can itself be a chain or graph or processes
simboss: but still
simboss: each single piece
simboss: must have an id
simboss: so that I can track the status
aaime: I think the issue is deciding on a minimal API vs one that can be everything to everyone
aaime: what I mean is, we need the ID for server side processes all right
simboss: of course
aaime: what about people runnig batches, what are their needs
simboss: I do that a lot
simboss: and what I did
aaime: are we sure that we won't end up with a fat interface?
simboss: was being able to do batches in parallel as needed
aaime: (I'm just making questions, not trying to imply a solution)
simboss: (no problem, brainstorming is good)
simboss: all I would like to see
simboss: for a process
jgarnett: simboss we are after something simple here
jgarnett: ie
simboss: would be an ID
simboss: and a status
simboss: nothing more
jgarnett: the ability for Java programmers to hack to gether something
jgarnett: managing with an ID and all that
jgarnett: should probably be wrapped around this
aaime: yeah, I was thinking along the same lines
jgarnett: we have tried and failed several times now
aaime: using the listener
jsorel: If i can say something: I see a process as the lowest level. This interface could be extended in the future for web services or others...
aaime: you can make up a status object in the wrapper
jgarnett: just because we tend to get complicated; in several directions at once.
simboss: I see your point aaime
aaime: simboss, wrapping those Process objects into a GeoServerProcess would be possible
simboss: I am happy with just having my suggestions captured
aaime: and leave the Process interface bare bones
simboss: I agree that we can create the status using a progress listener
aaime: the id was well can be managed by the wrapper only
aaime: yet
aaime: I have one thing in mind that might require having it in the Process interface
sfarber left the room.
aaime: jgarnett, picture the situation where you have a chain of processes
simboss:
aaime: the output of the "whole" is not only the output of the leaves in the processing chain
aaime: but also something in the "middle"
aaime: how do you identify it?
aaime: identifying the output goes thru identifiying the process that generated it, no?
aaime: Of course we could do double wrapping
simboss: well aaime
aaime: to attach id concept to both the single elements and the chain
simboss: I can tell you how people do this with WPS
aaime: but it does not look that good
simboss: chaining WPSs
simboss: there is really no way to do what you want
simboss: what you want is orchestration
simboss: not a process
jgarnett: sorry guys was not paying attention
simboss: if you wrap a chain as a process
simboss: usually that means
jgarnett: but once again; if we get complicated we don't get anything.
simboss: that you do not care about the intermediate results
simboss: (sure jgarnett, just fleshing out some concepts
)
jgarnett: how I would handle it is to feed a progress listener into each step of the chain; but that is me..
jgarnett: okay sure.
aaime: jgarnett, how does that give you identity?
aaime: Ah, another thing
simboss: I am probably to biased towards services
aaime: I understand that returning Object allows you to return anything
jgarnett: we have Objects for identity
aaime: but to make the Processes
jgarnett: (ie at this level)
aaime: chainable
aaime: having a Map of outputs would be probably better
jgarnett: nothing stops you from doing a Map of outputs
aaime: (that's what I did in my process model in by thesis)
jgarnett: especially with type narrowing.
aaime: jgarnett, you don't understand
aaime: if you want to make an editor
aaime: the concept of multiple recognizable output is better to be found in the api itself
aaime: something you can rely onto
jgarnett: okay; perhaps I should not take part in this conversation if I cannot pay full attention
jgarnett: yeah okay; I do get that part
aaime: otherwise one process outputs a single FeatureCollection
jgarnett: hence the cut of IOp in uDig
aaime: another an array of whatnot
aaime: another a List
jgarnett: makes sure to produce both the metadata stream; ie so you can wire it
aaime: and so on
aaime: a mess
jgarnett: and the data stream
aaime: why not mimic the input part?
jsorel: public Class resultClass(Map parameters) throws IllegalArgumentException; <--- gives you the output type
aaime: make a sort of Parameter map in output
simboss: guys have you looked at WPS?
jgarnett: indeed; but then we are into BPEL goodness
simboss: (stupid question
)
aaime: jsorel, how do you make an editor that allows you to build a chain of spatial analysis
aaime: based solely on that API
aaime: if you have a Map with description like the one you have in input
aaime: you have all you need to build an editor
aaime: it does not look like a massive complication in the API to me?
jsorel: you can't know the output before the process happen. process can do everything and nothing
TheSteve0 [n=scitronp@66.151.148.225] entered the room.
aaime: yes you can
jsorel: you can"t predict what will be the output
aaime: you cannot know the actual output
aaime: but you can know in advance
aaime: you'll get a FeatureCollection and 3 integers
aaime: you know the structure in advance
aaime: if you do a DEM analysis you can get out
aaime: ridge lines and min/max points
aaime: you know in advance you'll get a feature collection of points
aaime: and one of lines
jsorel: imagine a process that splits a featurecollection depending on different parameters.
simboss:
jgarnett: I got to head out guys; I will catch up on the logs...
simboss: this is what 52 north does for wps
jsorel: it can result in 1 or N outputs
aaime: then given the parameters you know what the output are
simboss: check it out quikly guys
simboss: it is the interface for an algorithm to become a WPS process
simboss: Map run(Map layers, Map parameters);
aaime: looks a bit saner, though I don't see the need to have separate layers and parameter maps
simboss: neither do I
aaime: a parameter could be something that you input statically in one process, and something you get from another process
simboss:
aaime: in another case
simboss: it was just to show map for inputs and outputs
aaime: jsorel, in the case of a process that does what you proposed above
aaime: you either have the machinery to make the process say how the output will look like given a certain set of inputs
aaime: or you cannot model, describe it
aaime: I think you cannot make a WPS at all out of an API that returns an Object
aaime has set the topic: 0) what is up 1) magic wfs xml dependency 2) jira use 3) Some considerations on the process proposal
aaime: the api could be modified to be
simboss: (you need something that is web accessible)
aaime: public Map<Output> resultClass(Map<Parameter> input)
aaime: where Output looks a lot like Parameter, without sample and required
aaime: does this make sense?
aaime: (well, Map<Output> resultMap(Map<Parameter> input))
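A sketch of the shape being floated in this exchange, with inputs and outputs both described as maps so an editor or a WPS front end can introspect a process before running it; the names are illustrative only and reuse Parameter for the output descriptions, they are not the voted proposal:

// Sketch only: map-based inputs and outputs, as discussed above.
interface ProcessFactory {
    /** Describes the inputs the process accepts, keyed by parameter name. */
    Map<String, Parameter> getParametersInfo();

    /** Describes what the outputs will look like for the given inputs,
        or returns null to say "I don't know" for very dynamic processes. */
    Map<String, Parameter> getResultInfo(Map<String, Object> parameters);

    /** Creates a process configured with the given inputs. */
    Process create(Map<String, Object> parameters);
}

interface Process {
    /** Runs the process; keys of the returned map match getResultInfo(). */
    Map<String, Object> process(ProgressListener monitor);
}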
ticheler [n=ticheler@host240-197-dynamic.49-82-r.retail.telecomitalia.it] entered the room.
jsorel: yes, but i think in some cases it's really going to be hard to predict
simboss: I am quite happy with that
aaime: Hum, I did a modeller with some 50 operators and did not have any of those cases
aaime: but in fact, those 50 operators were really even simpler
aaime: the output configuration was static, no need to know the inputs
aaime: they were the most common ones found in a gis system
aaime: (both raster and vector)
aaime: this would be more flexible
simboss: (go aaime go, go!)
simboss: absolutely
aaime: anyways I think we could have a deal breaker
simboss: we are not trying to make you waste time jsorel
aaime: if the process is really so dynamic and hard?
aaime: sorry
aaime: if the process is really that dynamic
aaime: it may well return null to state "I don't know"
simboss: (this discussion is pretty interesting)
aaime: it won't be possible to use it in a modeller
cedricmc [n=chatzill@ble59-6-82-240-110-86.fbx.proxad.net] entered the room.
aaime: but it would give you full flexibility
aaime: what do you think?
jsorel: the null could be a solution, this way i agree
aaime: Cool
aaime: Out of curiousity, how do you plan to handle such a very generic process in Puzzle?
aaime: you don't know anything about it, so you don't know what to make of the result?
jsorel: dont worry for that
each process will have a special "widgettool"
aaime: A process API is really about extensibility and pluggability, but if you don't have handles
jgarnett: aaime:
aaime: ah; i see, so you'll put the knowledge about the results in the UI
jgarnett: I will email you and Thuns privately since he may be available for testing
jsorel: yes
aaime: jgarnett, you're running too much. I'm thinking about symbols privately
aaime: It may be that I'll have something next year
aaime: or next month
aaime: depending on a lot of other priorities
aaime: jsorel, how is that any different than having the process tell you about the results?
aaime: the UI will have to know anyways, some way or the other
simboss: jgarnett:
simboss: I actually did refactor some of the streaming renderer to handle such symbols
simboss: as well as APP6
simboss: from svg definitions
jsorel: aaime: i wont do UI for all process...
jsorel: only for the simple ones
simboss: jgarnett: what are you doing exaclty?
jgarnett: answering questions on the user list actually
jgarnett: we get one of these questions a month
jgarnett: a fellow called Theuns thought he may have been able to pay to get the work done
jgarnett: and then andrea asked about it again today.
aaime: jsorel, so you would be ok to update the proposal in order to have Maps as the output, and output descriptions by extending the concept proposed ProcessFactory.resultClass(...) method?
jgarnett: so I wrote up what was needed.
simboss: I might force the nato guys
simboss: to release part of the work
jsorel: I'm updating the page
simboss: let's say ask
jgarnett: okay "ask"
simboss: so that we can work on it and improve it
jgarnett: this Theuns guy had the MIL2525B set done up as a font; but he was not going to be able to release it to us as an unsupported module.
jgarnett: I last saw it as SVG myself...
jgarnett: andrea; what were you needing this work for?
jgarnett: (or just to catch up to mapserver?)
aaime: I have no needs, just interest
aaime: yes, I have seen people turning down geoserver in favour of Mapserver
aaime: just because we don't support extensible symbol sets
jgarnett: for me this is all about recovering some work that was "lost" in OWS-3
aaime: like diagonal line fills for example
jgarnett: I am sick of answering the email every month
aaime: but anyways, as I said, there is no business plan behind this
aaime: I just find it fun
jgarnett: no worries; I have some more content (ie the SLD examples) that I can add to that page.
aaime: which means it has to fight for my scarce free time
simboss: let's try to make it business then
jgarnett: but you understand da plan.
aaime: simboss, sure, find me anyone willing to pay for that
simboss: I might
simboss:
aaime: I'm interested, let me know when you find someone then
jgarnett: Thuens might as well; the reason I was talking to him.
jsorel: done, update the wiki page
ticheler left the room (quit: Read error: 104 (Connection reset by peer)).
jgarnett: but I want it done more than I want additional work.
aaime: jsorel, looks good to me
jsorel: jody simboss ?
aaime: small one: public boolean isValid(Map parameters); -> public boolean isValid(Map<Parameter, Object> parameters);
jgarnett: thinking
jgarnett: interesting
jgarnett: avoids the use of a Key String
jgarnett: hard ...
aaime: eh?
jsorel: (Map<String, Object> you mean
jgarnett: I usually go for Map<String,Serializable>
jgarnett: so you can save the bugger.
jgarnett: but you end up having to know about your data; and look it up by id or something ....
aaime: ah, String would be as good, yes
jgarnett: Parameter.key exists does it not?
jgarnett: aaime; I was thinking about your "output" format request... let me be wild here:
desruisseaux left the room (quit: "ChatZilla 0.9.81 [Firefox 2.0.0.12/2008021313]").
jgarnett: Map<String,Object> expectedOutput( Map<String,Object> params );
jgarnett: would be an exception if the params are not valid
jgarnett: would be a lookup of String-->metadata otherwise
jgarnett: ie
***jsorel updated the page
jgarnett: String > FeatureType
jgarnett: String > Class
jgarnett: (just throwing out 2 cents...)
aaime: jgarnett, I lost you
aaime: this is not making sense to me
jsorel: jgarnett: the expectedOutput doesnt return a map
jsorel: it's an outputparameter array
jgarnett: jsorel array / map is the same kind of thing
jgarnett: a data structure to hold the results
jgarnett: in one you use a number to lookup
jgarnett: in the other a key
CIA-23: vitalus 1.1.x * r29527 udig/udig/plugins/net.refractions.udig.catalog/src/net/refractions/udig/catalog/ServiceParameterPersister.java: restoring of the catalog is fixed
jgarnett: makes no difference to me...
jgarnett: sorry Eclesia; just looked at "resultClass" now
ticheler [n=ticheler@host240-197-dynamic.49-82-r.retail.telecomitalia.it] entered the room.
jgarnett: (always catching up)
jgarnett: I fear you will need two "type" fields
jgarnett: I am thinking when the result is FeatureCollection
jgarnett: you would also like to know the FeatureType of the contents.
jgarnett: (for OutputParameter)
aaime: true
jgarnett: I would call one "type"
jgarnett: and the other "metadata"
jgarnett: but perhaps that is stupid?
aaime: no, in fact that would be needed as well
aaime: some process might be working only if the feature type has the main geometry of type "point"
jgarnett: usual waste of time feedback: String -> InternationalString when used for display
jgarnett: (also match the Param interface from DataStoreFactorySPI - since gabriel needs it anyways)
aaime: with the current api you'd know you wired them properly only when running them
jsorel: sigh we could provide a sample object, this way we dont have a second type
jgarnett: your guidance on this one is better than mine aaime; I have only played with working BPEL systems
jgarnett: and at the coding level they were often horrible
jgarnett: make sample optional
aaime: well, mine was made in VB6, how good looking do you think it was?
jgarnett: depends
jgarnett: did you comment it?
jgarnett: or was it a mess of magic maps?
aaime: nope, everything statically typed
jgarnett: (which was my complaint with the system I saw)
aaime: it was quite constrained
jgarnett: interesting
jgarnett: aside:
aaime: specific to GIS
jgarnett: back in the datastore days
aaime: could not do anything else
jgarnett: Param had an alternative idea
jgarnett: useing a Java bean
jgarnett: what do you guys think about using that? rather than a Map
jgarnett: statically typed; JavaBean reflection provides everything else you need for wiring.
jsorel: reflection...
aaime: hum, there's that shadow class you can couple with a JavaBean to actually provide description
aaime: but Param is quite a bit easier to use
jgarnett: you ask for your getParameter() bean
jgarnett: fill in the blanks
aaime: that part of the javabean api (descriptions, constraints)
jgarnett: and process
aaime: is not well known
jgarnett: really?
jgarnett: I always used it when making AWT apps
jgarnett: it came out before reflection.
aaime: never used it and I made tons of swing apps
jgarnett: swing had models
jgarnett: beans were not needed as much
***jsorel never used it
jgarnett: okay so let me put this in other terms
jgarnett: could we use a strongly typed object for the parameter
aaime: I'd say, let's stay away from it... it's almost dead, though in fact part of the java runtime
jgarnett: and the result
jgarnett: (ignore the bean part; just the object part)
jgarnett: here is my reasoning ...
aaime: jgarnett, with reflection only how do you get a "title" "description" and whatnot?
jgarnett: it would be strongly typed; harder to screw up
jgarnett: code examples would be easier to understand
afabiani [n=chatzill@host-84-220-176-135.cust-adsl.tiscali.it] entered the room.
jgarnett: and it is still available for dynamic goodness via reflection
jgarnett: BeanInfo (it is true)
jgarnett: PropertyDescriptor
aaime: right, BeanInfos are basically dead, that was my point
jgarnett: and so on ...
jgarnett: okay so we thought about it ...
aaime: it would make the process api harder to use
jgarnett: I just am horrified to see my simple Param hack stay alive so long; suppose it is doing that because it is simple
aaime: indeed
jgarnett: okay I will let it live
aaime: java made it too hard to attach extra stuff to properties
aaime: hmmm... annotations maybe?
jgarnett: speaking of ...
aaime: that would make javabeans more palatable
jgarnett: yeah you beat me to it
aaime: jsorel, what do you think?
cedricmc left the room (quit: "ChatZilla 0.9.81 [Firefox 2.0.0.12/2008020121]").
CIA-23: desruisseaux * r29528 geotools/gt/modules/unsupported/coverageio/src/ (7 files in 2 dirs): Deeper investigation of TreeNode layout and debugging.
aaime: have the parameters be a single java bean
aaime: with annotations on the getters to provide title, description, required and whatnot?
jsorel: i seen a bean constraints once... and i stopped after seeing it
jgarnett: aside: there is one downcheck; internationalization - if you change Parameter description to an InternationalString I have no answer for it in annotation land
jgarnett: we are thinking object not bean
aaime: jsorel, ever played with hiberante?
jgarnett: let me try your example ...
jsorel: update the page and add another solution... i'm getting lost in your talks
aaime: sorry
***jsorel wants an example to understand
aaime: an example of a class annotated to make it hibernate ready:
aaime: it's just a javabean
aaime: with some annotations to specify the extra behaviour
jgarnett: class ClipProcess implements Process<Clip> {
jgarnett: class Clip
jgarnett: FeatureCollection process( Clip clip, ProgressListener );
jgarnett: }
jgarnett: you would put a @description on "content" and "bounds" above.
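A fleshed-out sketch of the bean-plus-annotations idea in the style of the Hibernate example linked above; the @Description annotation and the generic Process<Clip> are assumptions taken from this exchange, not an existing GeoTools API, and java.lang.annotation plus the GeoTools FeatureCollection/Envelope/ProgressListener imports are assumed:

// Sketch only: parameters as a strongly typed object, metadata via annotations.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Description {
    String value();
}

class Clip {
    private FeatureCollection content;
    private Envelope bounds;

    @Description("The features to be clipped")
    public FeatureCollection getContent() { return content; }
    public void setContent(FeatureCollection content) { this.content = content; }

    @Description("The clipping envelope")
    public Envelope getBounds() { return bounds; }
    public void setBounds(Envelope bounds) { this.bounds = bounds; }
}

class ClipProcess implements Process<Clip> {
    public FeatureCollection process(Clip clip, ProgressListener monitor) {
        // ... clip clip.getContent() to clip.getBounds() and return the result ...
        return null;
    }
}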
aaime: hem, Jody, the output would be another bean
jgarnett: yeah yeah;
jgarnett: do it that way for the wiki.
jgarnett: right now I just want to see how horrified Eclesia is at the idea.
***jsorel is voting -50 for now, and that won't change until i understand the stuff
jgarnett: honestly I would find this way easier to understand as a java programmer; but it would be harder to hook up to a ui
aaime: 50x horrified it seems
jgarnett: I am going to grab food
aaime: jgarnett, since he's planning to make up a custom UI for each process, not any bit harder
jgarnett: Eclesia I am happy with the proposal as shown; I would recommend InternationalString and a single Parameter class w/ metadata field.
jgarnett: the pure Java solution seems a bit much
aaime: jsorel, I think having beans woudl make the code look nicer to use a play with
aaime: but the original proposal is miles better than what we have now (nothing)
jgarnett: yeah I noticed that about him; dynamic user interface is the way to go hence the getIcon stuff
aaime: jgarnett, there is no way you can make a decent UI dynamically
aaime: each process needs a custom crafted UI if you want it to be usable
jgarnett: but you can make an indecent ui; see geoserver
jgarnett: your point is taken
aaime: processes are a lot harder than just a few connections params
jgarnett: um; some of the BPEL stuff is not bad; but you are correct there is usually a custom ui to configure the important nodes.
aaime: but I agree current GeoServer datastore UI is horrible
aaime: no, make that "the current GeoServer UI is horrible" and let's call it a day
jgarnett: I am going to go grab food; can someone post the logs...
aaime: jsorel, can you pinpoint what makes you feel so bad about the javabean + annotation proposal?
aaime: I just want to see if it's just some lack of explainations
aaime: or something more deep
jsorel: I can't cote +1 for something i never used before
jsorel: that's why i want a clean example
jsorel: vote*
jgarnett: understood
jgarnett: (aside: andrea does getHorizontalCRS do what we need for the 3D work? Is it really just a matter of hooking it up .... everywhere?)
aaime: I could make an example, but in fact I never made an annotatin processor so I'm uncertain about the implementation
aaime: jgarnett, more or less.. that's where I'm stuck now at least
jgarnett: andrea I think there are examples for generating bean info already
aaime: if anything will pop up after that, I don't know
aaime: googling for "beaninfo annotations" only provides list of people asking for the annotations
aaime: no one providing them
jgarnett:
jgarnett: has something
jgarnett: cannot believe it is not part of Java 5
aaime: this makes you up to speed with what Sun thinks of the BeanInfo future
aaime:
jgarnett: well I can see some examples of @Property
jgarnett: so I am a bit confused
aaime: jgarnett, not intersted in random examples. Is there a maintained library anywhere or not? That's the question
aaime: otherwise we'll end up killing this proposal like the others, just by stretching too much in another direction we did not try before
jgarnett: I agree
jgarnett: @annotations are good
jgarnett: no need for bean info
jgarnett: that is why nobody cares
jgarnett: (darn you guys and your fascinating conversations....)
CIA-23: desruisseaux * r29529 geotools/gt/modules/unsupported/coverageio/src/main/java/org/geotools/image/io/mosaic/TreeNode.java: Removed accidental debugging code.
aaime: lol
aaime: well, I'll think about this and see if I can make up anything
aaime: but
aaime: I don't want jsorel to lose steam
jgarnett: I agree
aaime: jsorel, are you sick of waiting on this proposal?
***jsorel already lost some
jgarnett: lets say one code example; and Eclesia can say if it is a good idea or not.
jgarnett: or do we skip it and go with what is there now?
aaime: jgarnett, I have no time to make one now
jgarnett: I will complete the one above then
jgarnett: (sigh!)
jsorel: one thing you must also think, there are already plenty of "process like" classes in udig
jsorel: so if it too different...
jgarnett: that is fine Eclesia; it is more important to be clear
aaime: jsorel, since you're eager to get you hands dirty
aaime: have you tried the current api in a real example?
jsorel: dont worry for me, i have time for this process thing
aaime: I usually find that it provides good insight on the weaknesses of a proposal
jsorel: with the current proposal i know i can use it
aaime: I'm pushing a little because I had an experience with Hibernate
aaime: and using beans and annotations was a pleasure
jsorel: make an example (not now if you don't have time, but during the week)
aaime: ok, I'll try
aaime: if I don't manage to, I'll vote +1 for the proposal as is
aaime: next meeting we vote?
jsorel: i dont have anything against annotations or bean, that's just i dont know them
jgarnett: hi
jgarnett: updated page
jgarnett: - single Parameter class
jgarnett: - metadata field
jgarnett: - International String
jgarnett: also "getResultInfo" to match getParametersInfo()
jsorel: getInputInfo() and getOutputInfo() ? just a suggestion
jsorel: no big deal anyway, leave it
jgarnett: yeah go for it
jgarnett: added a comment w/ the code example
jgarnett: a bit weak; did not do the factory part.
jgarnett: but hopefully shows what is needed?
http://docs.codehaus.org/pages/viewrecentblogposts.action?key=GEOTOOLS&currentPage=3
16 October 2012 04:15 [Source: ICIS news]
SINGAPORE (ICIS)--
The Chinese producer restarted the 25,000 tonne/year NBR line to produce 3355 NBR, a major rubber grade, the source added.
The 50,000 tonne/year NBR plant was shut on 20 July for regular maintenance. However, the plant’s restart date was delayed, despite the completion of the turnaround, because of weak demand in the market, the source said.
The producer offered the first batch of on-spec NBR from its 50,000 tonne/year NBR plant in June 2011 when it was newly started up. However, the plant’s operating rate was kept largely at around 50% capacity since.
Ningbo Shunze Rubber has increased its NBR offers by CNY1,200/tonne to CNY22,000-22,200/tonne
Demand in the Chinese market has been improving, according to
http://www.icis.com/Articles/2012/10/16/9604175/chinas-ningbo-shunze-rubber-restarts-one-nbr-line-on-15.html
11 June 2013 22:18 [Source: ICIS news]
CAMPINAS, Brazil (ICIS)--Brazil state-run oil producer Petrobras has delayed the start-up of its Abreu e Lima refinery to November of this year, the company said on Wednesday.
The first train of the refinery, which is located in the state of Pernambuco, should be complete by November and the second by May 2015, the company said.
The refinery originally was scheduled for start-up in June of this year.
"A lot of factors contribute for variations in the deadline [of the project] such as technical contingencies, strikes, urban and environmental conditionings, bad weather, among other [things]," Petrobras said in a statement.
Petrobras and Petroleos de Venezuela (PDVSA) had talked about partnering on the Abreu e Lima refinery.
PDVSA is to have a 40% stake in the project. However, PDVSA has been unable to secure a $10bn (€7.5bn) loan to pay for the stake.
Petrobras said it could develop the refinery alone if PDVSA leaves the project.
http://www.icis.com/Articles/2013/06/11/9677635/brazilian-refinerys-start-up-delayed-until-november.html
Aros/Developer/Docs/Libraries/BSDsocket
Introduction
The Amiga had a few attempts at internet access (TCP) and address (IP) stacks: AmiTCP (BSD 4.3), then Miami, then Roadshow (OS4 and possibly AmigaOS (TM)).
A few of these functions have macros for easier use (select, inet_ntoa, etc.), but you must use CloseSocket and IoctlSocket in place of close and ioctl. Bsdsocket.library has its own errno, which can be queried with Errno(), or you can make it use the errno variable using SetErrnoPtr() or SocketBaseTagList().
Make sure bsdsocket.library is opened before you call the BSD functions; if it is not, it is quite normal that this is where it crashes...
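A minimal sketch of that rule, assuming the usual exec.library calls; version 4 is a common minimum to request, adjust as your code needs:

#include <exec/types.h>
#include <proto/exec.h>

struct Library *SocketBase = NULL;

/* Open bsdsocket.library before making any socket calls. */
int init_network(void)
{
    SocketBase = OpenLibrary("bsdsocket.library", 4);
    if (SocketBase == NULL)
        return 0;   /* no TCP/IP stack running, or library too old */
    return 1;
}

void cleanup_network(void)
{
    if (SocketBase != NULL) {
        CloseLibrary(SocketBase);
        SocketBase = NULL;
    }
}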
net_udp.c contains...
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <sys/param.h>
#include <sys/ioctl.h>
#include <errno.h>
#include <proto/socket.h>
#include <sys/socket.h>
#include <bsdsocket/socketbasetags.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#ifdef __AROS__
#include <proto/bsdsocket.h>
#endif
#ifdef NeXT
#include <libc.h>
#endif
and uncomment them
//extern int gethostname (char *, int);
//extern int close (int);
For example you create a socket with socket() (from bsdsocket.library) and then you pass it to write() from arosc, but this integer value means something different to arosc. The same applies the other way around: using an fd created by arosc with bsdsocket.library.
Do you know that the bsdsocket API has support for fd hooks? They allow syncing up fd numbers between bsdsocket.library and libc. If you look at the original netlib code, you'll see it. That implementation is based on some support from the SAS/C libc.
- Under AROS you need -larossupport instead of -ldebug.
- Of course you need -lsdl since you are using SDL :)
- make sure ioctl or ioctlsocket is defined to IoctlSocket, and close to CloseSocket.
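A sketch of the kind of wrapper defines meant here; only do this in socket-only translation units, since redefining close() would also affect file descriptors coming from dos/arosc:

/* Map the POSIX names used by portable code onto bsdsocket.library calls.
   Only safe where every descriptor in the file really is a socket. */
#define ioctl(fd, request, arg)  IoctlSocket((fd), (request), (char *)(arg))
#define close(fd)                CloseSocket(fd)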
Reference
Every bsdsocket.library call takes SocketBase as an argument, and since you declared a global NULL SocketBase, socket functions will fail. To solve the problem, you can include the header below instead of proto/bsdsocket.h, and use SocketBase as normal, which will become the task's user defined field. Please note that you have to call init_arossock() in the right thread, preferably in a wrapper around exec_list, and don't declare a global SocketBase variable.
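The header referred to is not reproduced here; the following is only a sketch of the idea, a per-task SocketBase stored in the task's user data field. Only the init_arossock() name comes from the text above, everything else is an assumption:

/* Hypothetical per-task SocketBase header, included instead of proto/bsdsocket.h. */
#include <exec/tasks.h>
#include <proto/exec.h>
#include <proto/bsdsocket.h>

/* every socket call resolves SocketBase through the current task */
#define SocketBase ((struct Library *)FindTask(NULL)->tc_UserData)

/* call this once in the thread that will do the socket I/O */
static inline int init_arossock(void)
{
    struct Library *base = OpenLibrary("bsdsocket.library", 4);
    if (base == NULL)
        return 0;
    FindTask(NULL)->tc_UserData = base;
    return 1;
}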
As far as I know WaitSelect() works just like select(); it would be nice to paste the bit of the code here where it fails... WaitSelect() can only be used on sockets, not on DOS filehandles. On Unix they are handled uniformly, but on Amiga you have to use a different code path for file handles and sockets.
One of the arguments of LockMutex is still NULL, it could be ThreadBase too. One thing I noticed is that you don't use pthread_join/WaitThread in your code. Instead you do busy loops until the given thread sets its state. Because one of the functions which overflows the stack is wait_thread_running().
LockMutex fails in exe_elist(), because cclist_mutex is not initialized for AROS in main.c, hence the NULL pointer.
Initializing cclist_mutex solves the problem. Here are the two thread-waiting functions without the busy loop:
void wait_thread_running (S4 threadnum)
{
    LockMutex (pthreads_mutex);
    if (pthreads[threadnum].thread)
        WaitThread(pthreads[threadnum].thread, NULL);
    UnlockMutex (pthreads_mutex);
}

void join_threads (S4 threadnum)
{
    /* wait until all threads are stopped */
    S4 i;

    LockMutex (pthreads_mutex);

    /* thread zero is the main thread, don't wait for it!!! */
    for (i = 1; i < MAXPTHREADS; i++)
    {
        /* don't check the calling thread!!! */
        if (i != threadnum && pthreads[i].thread)
        {
            WaitThread(pthreads[i].thread, NULL);
        }
    }

    UnlockMutex (pthreads_mutex);
}
Replacing WaitThread with pthread_join would work on other platforms. Checking (pthreads[i].state == PTHREAD_RUNNING) isn't necessary anymore, since these thread waiting functions instantly return on terminated threads.
It's waiting till the thread is started (was created). It's the thread_sync function in the examples. This is for synchronizing threads. The next step would be sending data to the thread: with threadpush ().
Also make sure to open bsdsocket.library in the same thread where you do the socket operations, because you can't share socket descriptors between threads. Technically it's possible to pass sds between threads, but it requires some extra code, and generally it is not worth the hassle if you can move all the socket manipulation into one thread.
Examples
/* Copyright (C) 2008-2009/socket.h> #include <netinet/in.h> #include <netdb.h> #include <errno.h> #ifdef __MORPHOS__ #include <sys/filio.h> #elif defined(AROS) #include <sys/ioctl.h> #endif #include <devices/timer.h> #include <proto/exec.h> #include <proto/socket.h> #include <string.h> #include "quakedef.h" #include "sys_net.h" struct SysSocket { int s; }; struct SysNetData { struct Library *SocketBase; struct MsgPort *timerport; struct timerequest *timerrequest; }; #define SocketBase netdata->SocketBase struct SysNetData *Sys_Net_Init() { struct SysNetData *netdata; netdata = AllocVec(sizeof(*netdata), MEMF_ANY); if (netdata) { SocketBase = OpenLibrary("bsdsocket.library", 0); if (SocketBase) { netdata->timerport = CreateMsgPort(); if (netdata->timerport) { netdata->timerrequest = (struct timerequest *)CreateIORequest(netdata->timerport, sizeof(*netdata->timerrequest)); if (netdata->timerrequest) { if (OpenDevice(TIMERNAME, UNIT_MICROHZ, (struct IORequest *)netdata->timerrequest, 0) == 0) { netdata->timerrequest->tr_node.io_Command = TR_ADDREQUEST; netdata->timerrequest->tr_time.tv_secs = 1; netdata->timerrequest->tr_time.tv_micro = 0; SendIO((struct IORequest *)netdata->timerrequest); AbortIO((struct IORequest *)netdata->timerrequest); return netdata; } DeleteIORequest((struct IORequest *)netdata->timerrequest); } DeleteMsgPort(netdata->timerport); } CloseLibrary(SocketBase); } FreeVec(netdata); } return 0; } void Sys_Net_Shutdown(struct SysNetData *netdata) { WaitIO((struct IORequest *)netdata->timerrequest); CloseDevice((struct IORequest *)netdata->timerrequest); DeleteIORequest((struct IORequest *)netdata->timerrequest); DeleteMsgPort(netdata->timerport); CloseLibrary(SocketBase); FreeVec(netdata); } qboolean Sys_Net_ResolveName(struct SysNetData *netdata, const char *name, struct netaddr *address) { struct hostent *remote; remote = gethostbyname(name); if (remote) { address->type = NA_IPV4; *(unsigned int *)address->addr.ipv4.address = *(unsigned int *)remote->h_addr; return true; } return false; } qboolean Sys_Net_ResolveAddress(struct SysNetData *netdata, const struct netaddr *address, char *output, unsigned int outputsize) { struct hostent *remote; struct in_addr addr; if (address->type != NA_IPV4) return false; addr.s_addr = (address->addr.ipv4.address[0]<<24)|(address->addr.ipv4.address[1]<<16)|(address->addr.ipv4.address[2]<<8)|address->addr.ipv4.address[3]; remote = gethostbyaddr((void *)&addr, sizeof(addr), AF_INET); if (remote) { strlcpy(output, remote->h_name, outputsize); return true; } return false; } struct SysSocket *Sys_Net_CreateSocket(struct SysNetData *netdata, enum netaddrtype addrtype) { struct SysSocket *s; int r; int one; one = 1; if (addrtype != NA_IPV4) return 0; s = AllocVec(sizeof(*s), MEMF_ANY); if (s) { s->s = socket(AF_INET, SOCK_DGRAM, 0); if (s->s != -1) { r = IoctlSocket(s->s, FIONBIO, (void *)&one); if (r == 0) { return s; } CloseSocket(s->s); } FreeVec(s); } return 0; } void Sys_Net_DeleteSocket(struct SysNetData *netdata, struct SysSocket *socket) { CloseSocket(socket->s); FreeVec(socket); } qboolean Sys_Net_Bind(struct SysNetData *netdata, struct SysSocket *socket, unsigned short port) { int r; struct sockaddr_in addr; addr.sin_family = AF_INET; addr.sin_port = htons(port); *(unsigned int *)&addr.sin_addr.s_addr = 0; r = bind(socket->s, (struct sockaddr *)&addr, sizeof(addr)); if (r == 0) return true; return false; } int Sys_Net_Send(struct SysNetData *netdata, struct SysSocket *socket, const void *data, int datalen, const struct netaddr *address) 
{ int r; if (address) { struct sockaddr_in addr; addr.sin_family = AF_INET; addr.sin_port = htons(address->addr.ipv4.port); *(unsigned int *)&addr.sin_addr.s_addr = *(unsigned int *)address->addr.ipv4.address; r = sendto(socket->s, data, datalen, 0, (struct sockaddr *)&addr, sizeof(addr)); } else r = send(socket->s, data, datalen, 0); if (r == -1) { if (Errno() == EWOULDBLOCK) return 0; } return r; } int Sys_Net_Receive(struct SysNetData *netdata, struct SysSocket *socket, void *data, int datalen, struct netaddr *address) { int r; if (address) { LONG fromlen; struct sockaddr_in addr; fromlen = sizeof(addr); r = recvfrom(socket->s, data, datalen, 0, (struct sockaddr *)&addr, &fromlen); if (fromlen != sizeof(addr)) return -1; address->type = NA_IPV4; address->addr.ipv4.port = htons(addr.sin_port); *(unsigned int *)address->addr.ipv4.address = *(unsigned int *)&addr.sin_addr.s_addr; } else r = recv(socket->s, data, datalen, 0); if (r == -1) { if (Errno() == EWOULDBLOCK) return 0; } return r; } void Sys_Net_Wait(struct SysNetData *netdata, struct SysSocket *socket, unsigned int timeout_us) { fd_set rfds; ULONG sigmask; WaitIO((struct IORequest *)netdata->timerrequest); if (SetSignal(0, 0) & (1<<netdata->timerport->mp_SigBit)) Wait(1<<netdata->timerport->mp_SigBit); FD_ZERO(&rfds); FD_SET(socket->s, &rfds); netdata->timerrequest->tr_node.io_Command = TR_ADDREQUEST; netdata->timerrequest->tr_time.tv_secs = timeout_us / 1000000; netdata->timerrequest->tr_time.tv_micro = timeout_us % 1000000; SendIO((struct IORequest *)netdata->timerrequest); sigmask = 1<<netdata->timerport->mp_SigBit; WaitSelect(socket->s + 1, &rfds, 0, 0, 0, &sigmask); AbortIO((struct IORequest *)netdata->timerrequest); }
Reference
The first is about breaking some API-level functionality to make way for supporting IPv6. Would it be a great deal to fix the existing functions that assume IPv4 addresses, so they take generic structures instead? miami.library provides the missing API. Miami was going the same way back then. IPv6 looks like a failed experiment. No one is seriously deploying it. In fact not much needs to be replaced. It affects mainly the resolver, introducing inet_PtoN() and inet_NtoP(). The rest is perfectly handled by the existing socket API. Just add AF_INET6.
How big a task would it be to replace the existing BSD-based internals of AROSTCP with more up-to-date code? Take recent NetBSD and you're done. FreeBSD diverged a lot. This will give you working sysctl and BPF/PCAP. AROSTCP already has some upgraded code. Its codebase is NetBSD v2.0.5. BTW, userland (like the resolver) from FreeBSD fits in perfectly. One more idea: separate protocols into loadable modules. This would give you AF_UNIX, AF_BT, etc. There is a single-linked list where all protocols are registered. Nodes are looked up using AF_xxx as a key. Then there are several pointers to functions. That's all. This is very well separable. In the end you would have protocol modules with these functions exported, and something like bsdcore.library, implementing basic low-level stuff like mbufs and bsd_malloc(). Ah, yes, there is some remainder (IIRC) from the original BSD4.3, where AF_xxx are indexes into an array. I was lazy enough not to remove this. So, for now both lookup mechanisms are coexisting. Throw away the old one, use NetBSD code as a reference. In fact BSD is very modular at source code level, the same as Linux. You can switch on and off any part without affecting others. So, porting the code isn't really difficult.
I would like to propose a 'unified file descriptor' model for arosc.library, to simplify porting of POSIX applications. The goal is to be able to use the same 'int fd' type to represent both the DOS BPTR filehandles and the bsdsocket.library socket backend. This will eliminate the need to rewrite code to use send() instead of write() to sockets, and to call CloseSocket() instead of close() for sockets.
Implementation would be as a table (indexed by fd #) that would have the following struct:
struct arosc_fd {
    APTR file;
    struct arosc_fd_operation {
        /* Required for all FDs */
        int (*close)(APTR file);
        LONG (*sigmask)(APTR file);   /* Gets the IO signal mask for poll(2)ing */
        ssize_t (*write)(APTR file, const void *buff, size_t len);
        ssize_t (*read)(APTR file, void *buff, size_t len);
        ..etc etc...
        /* The following are optional operations */
        off_t (*lseek)(APTR file, off_t offset, int whence);
        ssize_t (*sendto)(APTR file, const void *buff, size_t len, int flags, const struct sockaddr *dest_addr, socklen_t addrlen);
        ssize_t (*recvfrom)(APTR file, void *buff, size_t len, int flags, struct sockaddr *src_addr, socklen_t *addrlen);
        ssize_t (*sendmsg)(APTR file, const struct msghdr *msg, int flags);
        ssize_t (*recvmsg)(APTR file, struct msghdr *msg, int flags);
        ..etc etc..
    } *op;
} *fd_table;
The C library routines 'open()' and 'socket()' would be modified to:
- Ensure that either dos.library or bsdsocket.library is opened.
- Allocate a new fd in the FD table
- Set fd_table[fd].ops to a 'struct arosc_fd_operation' pointer, that defines the IO operations for either dos.library for bsdsocket.library
- Set fd_table[fd].file to a pointer to the private data for those operation (ie BPTR or socket fd #)
- Return the new FD number
The read()/write()/etc. would be modified to use the fd_table[fd].op routines to perform IO
The poll() routine will wait for activity on the OR of all the signal masks of the the provided FDs.
The close() routine also deallocates the arosc_fd entry
It's a layer on top of it. bsdsocket.library would be opened dynamically from arosc.library's socket() routine if needed.
Since arosc-generated FDs will be different from bsdsocket.library ones, how will programmers know what to pass to functions? Programmers would not use the bsdsocket headers nor protos directly (similar arosc.library headers would be generated that wrap them). The point is that programmers would no longer need to treat a bsdsocket fd as a special case.
the whole fd_table[] mapping concept. The 'file' element of the fd_table[n] would be a BPTR for a open()ed fd, and a bsdsocket.library socket descriptor number for one created by arosc's socket() call. The user would *not* be calling bsdsocket.library directly under the proposed scheme. Everything in that library would be wrapped by arosc.library.
Make sure that those codes will continue using bsdsocket functions and won't switch half arosc / half bsdsocket (for example arosc->send and bsdsocket->CloseLibrary). The 'best' solution would be to have a compilation flag (ie '#define BSDSOCKET_API') that ifdefs out the arosc send()/recv()/socket()/etc prototypes and makes send()/recv()/CloseSocket() and friends from the bsdsocket protos visible.
If BSDSOCKET_API is not defined, but bsdsocket.library is intended (ie calls to CloseSocket(), etc), then the user will get compilation errors about missing function prototypes. If BSDSOCKET_API is defined, but POSIX is intended, then they will get a link error about 'undefined symbol SocketLibrary'
Maybe the bsdsocket includes should also have some detection of arosc includes and produce a warning about the BSDSOCKET_API define being accessible to developers if both headers are included in the same compilation unit - I'm just wondering how we are going to get the word out to 3rd party developers that they don't have to change their code and can simply use the define - or maybe it should be the other way around? bsdsocket by default and a POSIX_SOCKET_API define to enable the support you mention?
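A sketch of how such a compile-time switch might look in an arosc header; the define name is the one proposed above, the header fragment itself is hypothetical:

/* hypothetical arosc header fragment */
#ifdef BSDSOCKET_API
/* raw bsdsocket.library API: send()/recv()/CloseSocket() and a visible SocketBase */
#include <proto/bsdsocket.h>
#else
/* POSIX-style wrappers provided by arosc.library */
int     socket(int domain, int type, int protocol);
ssize_t send(int fd, const void *buf, size_t len, int flags);
ssize_t recv(int fd, void *buf, size_t len, int flags);
int     close(int fd);
#endif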
http://en.wikibooks.org/wiki/Aros/Developer/Docs/Libraries/BSDsocket
Checking In: Jeff Wilcox - Writing the WP7 App Platform in C# and C++
- Posted: Apr 20, 2011 at 12:41 PM
- 39,056 Views
- 18 Comments
Jeff Wilcox is a developer on the Silverlight team. He spends a lot of his time (~80%) coding in C++. Strange? Of course not... Silverlight is a portable managed runtime and C++ (C with classes in this case) is what enables Silverlight portability. Of course, Jeff is a code-cranking machine and a very talented software engineer. Code on, Jeff!
What will Erik ask Jeff? What rabbit holes will we jump into?
This way, Alice. No, that way.
Keep cranking out great code, Jeff...and checking it Charles for the interesting questions related to C++.
Now that we know a native framework exists on the phone and is used for some Microsoft apps, I think a lot of developers are gonna ask you to release it.
Android and iOS (obviously) already have excellent support for native code, now it's your turn
@Charles thank you for asking the question about "Splash",
and, Zune client is based on Splash ? on PC ? wow, so its not CE only !
p.s. does Mary Jo Foley know about this name ?
but it seems like Silverlight runtime is really a mess, with code from everywhere and hacked together.
@LordKain: Thanks. To be clear, the WP7 OS is largely native (as you'd expect). The native interfaces for programming the OS are not available to the public in this release, as you know. There are very good reasons for this... I can't and won't speak for the WP7 team about if/when they will provide native access. Sorry. I am not qualified to do so.
I can see how our conversation might have confused you in this regard since I asked a question about Silverlight on Windows CE (which is XAML over C++). My apologies.
C
and the link on HTML5/CSS3 turing complete
well, I checked out the UI framework of the Zune software, it's called 'Microsoft UIX Framework', and it turns out it's managed code with namespace "Microsoft.Iris", at least for the PC version that Zune uses, and it's said that the WP7 version is native code, but is it the same as the 'Silverlight for Windows Embedded' / "Embedded XAML Runtime" in CE?
it seems like, the Splash/Iris/UIX is 'Microsoft internal' only, while 'Silverlight for Embedded' is open API. if they are not the same thing, then the reasons he stated are not accurate, and actually you CAN edit your 'Silverlight for Windows Embedded' xaml in Blend.
@Charles: There's no confusion, I just hope that one day we'll get the ability to write native code for the phone. Discussing about Silverlight on Windows CE in this context is in my opinion a step forward
@LordKain: Baby steps. Inch by inch
C
@LordKain You can submit the request for a native API to
I did a cursory search and didn't see it as a request. I would be curious to hear what your specific need for native is.
Regarding native access on WP7, this was never announced at Mix, but they did list the InteropServices.dll as part of the new API available with the Mango release.
The InteropServices.dll is the dll that allow you to call into native code (it's actually already there today but isn't officially supported by Microsoft).
@RichMiles: code sharing between the different platforms (WP7, Android, iOS) is something important in my eyes and this can be done to a large extent by using portable languages.
Leveraging existing code is another key factor, and there are literally billions of lines of existing, perfectly-working C/C++ code...
Thanks for the link!
WHO all helped i was one of them. HELLO
Good talk, but it would be more understandable in Spanish; I don't speak English!
Charles: I can't and won't speak for the WP7 team about if/when they will provide native access. Sorry. I am not qualified to do so.
We need this
Great interview. At first I was surprised by the amount of C++ in Silverlight, but of course it makes sense after the discussion about the various platforms.
I'm curious about the reaction to Rx ... no pun intended. What's the problem? I know what my issues are ... mainly in creaing complex Observable.Create methods and dealing with schedulers.
Why do all Channel 9 videos not have a resume option? When I try to download using a download manager, it shows "Resume Option is not available". That was not the case earlier.
Thanks for watching folks. Note: XAML is def. NOT Turing complete :-)
i am erfan from iran
very goooood
|
http://channel9.msdn.com/Shows/Checking-In-with-Erik-Meijer/Checking-In-Jeff-Wilcox-Writing-the-WP7-App-Platform-in-C-and-C
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
How To Make a 2.5D Game With Unity Tutorial: Part 1
A while back, you guys said you wanted a tutorial on “How To Make a 2.5D Game.” You guys wanted it, you got it!
If you don’t know what a 2.5D game is, it’s basically a 3D game that you squish so the gameplay is only along a 2D axis.
Some good examples are Super Mario Brothers Wii, Little Big Planet, or Paper Monsters.
One great way to make 2.5D games (or 3D games too!) is via a popular, easy, and affordable game development tool named Unity.
So in this tutorial series, I’m going to show you how you can use Unity to create a simple 2.5D game called “Shark Bomber!”
If you’re new to Unity but would like to learn it, this tutorial is for you! You’ll learn how to make a simple game from scratch and learn a ton along the way.
In this game, you take control of a small (but well-armed) airplane, and your job is to bomb the evil sharks while protecting the lovely clownfish!
Unity doesn’t use Objective-C, so for this tutorial, you don’t need any Objective-C experience. However, some experience with an OO language is a plus – ideally C#, Java, or Actionscript.
Keep in mind that this is a Mac-users tutorial – Windows users might not find it accurate for their setups. Also, keep in mind you will test on an iOS device only (not the simulator) – so make sure you have a device ready to work with!
OK, so let’s dive into Unity – but be sure to avoid those sharks! :]
Installing Unity
First let’s install the Unity editor – if you already have it on your mac just skip this step.
Download Unity from its download page. Once you have the DMG file, mount it and start the Unity installer, after a standard installation procedure you will have a /Applications/Unity folder where the binaries are located.
Start Unity, and click the “Register” button (don’t worry, you can try it out for free). Select “Internet activation”, click Next, and fill in the form on the web page that appears.
Important: For this tutorial, you need to choose the “Start Pro / iOS Trial” option so you can publish to the iPhone (not the plain “Free” option!)
After registration completes, Unity should start up and you should see a window that looks something like this:
Close down the “Welcome to Unity” popup, go to File\New Project, choose a folder somewhere on your disk and name the project SharkBomber. Make sure all the packages are unselected, and click Create Project.
Now you’re at a blank slate. Wow there are a lot of buttons, eh? Don’t worry – in the next section we’ll go over it bit by bit.
Unity Editor Overview
Let’s do some additional setup to get things into a known configuration.
In the top right-hand corner of the application window you’ll find a select box – select “Tall” from the list. This will rearrange the window contents (the default was “Wide” FYI).
Now find the tab in the top left corner (just below the tool bar) saying “Game”- drag it near the bottom of the window until you see indication it’ll snap to the bottom and drop it there.
Now you should see the layout from the picture below:
Let’s go quickly over the different panels:
- Scene: Here you move your 3D models around and can browse your 3D world.
- Game: This is what your selected camera (in this case the main camera) sees, updated in real time even while you use the editor; in this strip your game runs when you hit “Play” and you can test your game.
- Hierarchy: Your objects’ tree (much like the HTML DOM for example), currently you have only a camera, but we’ll add some stuff here later on; the objects in this list are currently present on the scene.
- Project: This is the contents of your project, your assets, audio files, everything you will be using now or later on.
- Inspector: Here you see all the properties of the selected object in the scene and you can adjust them; unique about Unity is that the Inspector is alive when you run your scene so it’s your debugger too!
- Toolbar: Here you have the tools to interact with the objects in your scene and the Run and Pause buttons to test your scene.
In your Unity3D project you can have many different scenes and you can switch between them. Currently you have one empty scene open in the editor. Let’s save it.
- Right-click inside the Project panel and choose “Create/Folder” – a new folder appears.
- Rename it to “Scenes” – you can do this by single left click on the folder name, or by selecting the folder and pressing “Enter”.
- Now from the main menu choose “File/Save scene” – navigate the save dialogue to [your project directory]/Assets/Scenes and save the scene as “LevelScene”.
Phew! OK – that’s done. Let’s check – in your Project panel open the Scenes folder – there’s your LevelScene scene. Cool!
Now we are ready to run the game – hit the Play button on top! Not much changes – but in fact your game is running inside the Game panel! Don’t forget to stop the game by clicking the Play button again (this is important!)
Setting up an iPhone Unity3D project
One of the nice things about Unity is that it can build games for iPhone, Mac, Wii and other platforms. In this tutorial we’ll be building an iPhone game so we need to setup some details first.
From the menu, choose “File/Build Settings” and click the “Add current” button to add the currently selected scene to the project. You can see when it’s added that it’s got an index of “0”, which means it’ll be the first scene to be loaded when the game starts. This is exactly what we want.
From the Platform list select iOS and click “Switch platform” button. The unity logo appears now in the iOS row.
This is all the setup we need for now, click “Player settings” and close this popup window. You’ll notice the Player settings opened in the Inspector, we need to set couple of things in here too.
In the “Per platform” strip make sure the tab showing an iPhone is selected, like so:
There’s lot of settings here, you know most of them from Xcode, so you can play and explore yourself later on.
Now use this inspector to make the following changes:
- In the “Resolution and Presentation” strip for “Default orientation” select “Landscape Left”.
- In the “Other settings” strip for Bundle Identifier put in whatever you want (except the default value)
- In the “Other settings” strip set the Target device to “iPhone only”.
One final touch: To the left under the “Game” tab now you have available different orientations/resolutions – select “iPhone Wide(480×320)” to match the default landscape orientation.
Congrats – you now have a basic “Hello World” project that you can try out on your iPhone!
Running the Game on Your iPhone
To test everything we did up to now, we’re going to finish by testing the project in Xcode and your iPhone.
Start up your favorite Xcode version – close the welcome screen if there is one and switch back to Unity. This is a trick to tell Unity which Xcode version to use – just have it running alongside.
Back in Unity, from the menu choose “File\Build&Run”- this will popup again the Build Settings, click “Build and Run” button.
You will be asked where you want to save your Xcode project (they don't really say that, but this is what they are asking). Inside your project directory create a folder called “SharkBomberXcode” (this is where your Xcode stuff will reside) and as the file name put in “SharkBomber”.
After a few moments the project is built and an Xcode window will pop up with a project called Unity-iPhone open. What happened is that Unity has generated the source code of an Xcode project, and you now have this generated project ready to build and run from Xcode.
You might wanna have a look at the source code – but it's really just boilerplate which loads the Mono framework (included as a few chunky DLL files) and some assets, so there's not much you can play with.
You have 2 targets, so make sure your iOS device is plugged in and select the “Unity-iPhone” target and your device. (I can’t make the Simulator target run, if you do great, but for now I’ll stick to the device).
Moment of truth – hit the Run button, and your Unity project now runs on your iPhone!
Good job, you can see the default Unity start screen and then just the blue background of your scene (and the words “trial version” in a corner).
Stop the Run task, switch back to Unity, and save your project.
Setting up the Scene
First let’s setup the main camera on the scene. Select “Main Camera” in “Hierarchy”, in the Inspector find “Projection” and set it to “Orthographic”, “Size” set to “10”, in Clipping Planes set “Near” to “0.5” and Far to “22”. Now you see a box nearby your camera inside the scene- this is the bounds of what will be seen on the screen from your world.
Notice we’ve set the camera projection to “Orthographic”- this means the depth coordinate won’t affect how things look on the screen – we’ll be effectively creating a 2D game. For the moment let’s work like that until you get used to Unity, then we’ll switch to 3D projection.
Set your camera position (in the Inspector) X, Y and Z to [0,0,0]. Note from now on when I write position to [x,y,z], just set the 3 values in the 3 boxes for that property.
Right-click in the Project panel and again choose “Create/Folder”, call the new folder “Textures”. Then download this background image I’ve put together for the game and save it somewhere on your disc. Drag the background image from Finder and drop it onto the “Textures” folder you just created.
It takes a good 20 seconds on my iMac to finish the import, but when it's done do open the folder, select the “background” texture, and in the inspector look at the texture's properties. At the very bottom in the preview panel it says “RGB Compressed PVRTC 4bits.” Hmmm, so Unity figured out we're importing a texture and compressed it on the go – sweet!
From the menu choose “GameObject\Create other\Plane” and you will see a blue rectangle next to the camera. This is the plane we just added to the scene; we’re going to apply the texture we’ve got to it.
Select “Plane” in the Hieararchy panel, in the Inspector in the text field at the top where it says “Plane” enter “Background”. This changes the object’s name, this is how you rename an object. Drag the “background” texture from the Project panel and drop it onto the “Background” object in Hierarchy. Set the position of the plane in the Inspector to [4, 0, 20], the Rotation to [90, 180, 0] and Scale to [10, 1, 5] – this as you see in the “Scene” panel scales and rotates the plane so that it faces the camera – this way the camera will see the plane as the game’s background.
Now in order to see clearly what we have on the scene we’ll need some light (much as in real life) – choose “GameObject\Create other\Directional Light” – this will put some light on your scene. Select Directional Light in “Hierarchy” and set the following Position coordinates [0, 0, 0].
Now we have all the setup and the background of the scene, it’s time to add some objects and make them move around!
Adding 3D Objects to the Scene
From the menu choose “GameObject\Create other\Cube” – this adds a cube object to your scene. This will be the game player, so rename it to “Player”.
Set the following position: [-15, 5.3, 8]. You’ll see the cube appearing near the left side of the screen in the “Game” panel – this is where our plane will start and will move over the sea surface to the other end of the screen.
Now let’s import the plane model! We’ll be using free 3D models produced by Reiner “Tiles” Prokein and released for free on his website (also have a look at his license for the models).
To start, download his airplane model and unarchive the contents.
Right-click inside the “Project” panel and choose “Create/Folder”, rename the folder to “Models”. From the the folder where you unarchived the plane model drag the file “airplane_linnen_mesh.obj” and drop it onto the “Models” folder in the “Project” panel.
Then right-click the “Models” folder and choose “Create/Folder”, rename the new subfolder to “Textures” – here we’ll save the textures applied to models. Drag the file “airplane_linnen_tex.bmp” and drop it onto the newly created “Textures” folder.
Next, select the “Player” object in the “Hierarchy” panel, look in the “Inspector” – the “Mesh Filter” strip is the filter which sets your object’s geometry (right now it sets a cube geometry); on the row saying “Mesh – Cube” find the little circle with a dot in the middle and click it – this opens a popup where you should double click the plane model and this will change your object’s geometry to an airplane.
Now one fine detail – the airplane looks a bit messed up. I'm no 3D expert, but I found what fixes this in Unity: select “airplane_linnen_mesh” in the “Project” panel, then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click the “Apply” button.
Cool – now you see the smooth airplane in the scene! Let’s apply also its texture: Drag “airplane_linnen_tex” texture from your “Project” panel and drop it onto “Player” in the “Hierarchy” panel. Unity automatically applies the texture to the airplane model we have on the scene.
Final touches for the airplane: to the “Player” object set Rotation to [0, 90, 350] and Scale to [0.7, 0.7, 0.7], this will rotate and scale the plane so it looks like flying just over the sea surface.
This game might not be Call of Duty quite yet, but stay tuned, because in the next section we’re going to make our airplane fly! :D
Beginning Unity3D programming with C#
As you’ve already seen in the Build Settings dialogue Unity can build your project to a Wii game, an iPhone game, standalone Mac game, and so forth. Because Unity is so omnipotent it needs some kind of intermediate layer where you can program your game once; and each different build translates it to platform specific code.
So oddly enough, to program in Unity you will be using C# (not Objective-C!), and when Unity generates your Xcode project it will translate that C# code to platform specific code automatically!
Right-click inside the “Project” panel and choose “Create/Folder”, rename the new folder to “Class”. Right-click the “Class” folder and choose “Create/C Sharp Script”, rename the new file to “PlayerClass”. Right-click inside the “Project” panel and choose “Sync MonoDevelop Project”- this opens the MonoDevelop IDE – this is the IDE where you can program in C#.
Note: MonoDevelop is a program ported from Linux, as you can see by the user interface skin called Gnome, so it’s normal if it crashes every now and then, especially when you try to resize the window. If that happens, just start it again by clicking “Sync MonoDevelop Project”.
Here are the three major areas in the MonoDevelop GUI:
- Your MonoDevelop project browser – in Assets/Class you will find your PlayerClass.cs file.
- The currently open class outline
- Editor area – there’s some syntax highlighting and some auto-complete which will help you coding.
Find your PlayerClass.cs file in the project browser and double click it to open it in the editor. Make sure that the class looks like the following:
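The actual listing isn't reproduced in this copy of the tutorial, but a freshly created C# script in Unity starts from the default template, so it most likely looked something like this (only the class name comes from the file we just created; the rest is the stock template):

using UnityEngine;
using System.Collections;

public class PlayerClass : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }
}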
The clause “using” includes the given libraries and frameworks, UnityEngine is the library which gives you access to things like iPhone’s accelerometer, keyboard input and other handy stuff.
You define your new class and inherit from MonoBehaviour – this gives you lots of stuff for free: you can override a long list of methods which are called when given events are triggered.
Just few lines below you have empty Start and Update methods – these are 2 important events.
- Start is called when your object appears on the scene so you can do your initialization (much like viewDidAppear: in a UIViewController).
- Update is called when every frame is rendered (i.e. could be 30, 60 or 100 times a second, you never know how often) and here’s where you do your movements, game logic, etc.
Now let’s switch back to Unity for a moment. We’ll want to make the airplane fly over the sea and when it goes out of the right side of the screen to appear again from the left side. Let’s measure at what position we need to move the airplane to the left. In the “Scene” panel at the top right corner you see the orientation gizmo – click on the X handle (it’s a kind of red cone, I’ll call it handle):
This will rotate the scene and orient it horizontally towards you. Click again the handle which is now on the left side of the gizmo – this will rotate the scene around, you might need to click the left handle few times until you see the scene frontal like this:
Now you can use the mouse scroll wheel to zoom in/zoom out on the scene and fit it inside the “Scene” panel. Make sure the move tool is selected in the toolbar above and select the “Player” in the “Hierarchy”.
Now you see that a new gizmo appeared attached to the airplane with a green and red arrows. Now, you can drag the arrows and they will move the airplane in the axis the arrows represent:
What you need to do is grab the red arrow (horizontal axis) and drag the airplane to the right until it goes out of the “Game” panel below.
So start dragging inside the “Scene” panel, but while looking at the “Game” panel. Leave the airplane just outside the visible screen and have a look at its position in the “Inspector”.
X should now be around 17.25 – so this is the right bound of the screen; you can drag the plane left and you'll see the left bound is about -17.25, so we'll use 18 and -18 to wrap the airplane flight. Bring the airplane back to just about the left side of the screen where it was before.
Switch back to MonoDevelop, and make the following changes to PlayerClass.cs:
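The exact listing is missing here; based on the description below (a public speed property, frame-rate-independent movement via Time.deltaTime, wrapping at 18/-18, and a new random speed between 8 and 12), the changes probably looked roughly like this sketch (the default speed value is an assumption):

using UnityEngine;
using System.Collections;

public class PlayerClass : MonoBehaviour {

    // Units per second; editable from the Inspector because it is public
    public float speed = 12f;

    void Update () {
        // The model is rotated 90 degrees, so its local Z axis points to the right of the screen
        transform.Translate(0, 0, speed * Time.deltaTime);

        // Wrap the flight: once the plane passes the right edge, put it back at the left edge
        if (transform.position.x > 18) {
            transform.position = new Vector3(-18, transform.position.y, transform.position.z);
            // Pick a new random speed between 8 and 12 to keep things interesting
            speed = Random.Range(8f, 12f);
        }
    }
}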
As you already guessed, you just declared a public property on your class called “speed”, but what is unique to Unity is that all public class properties are also accessible via… the “Inspector” panel (ta-daaaah)!
So you can set the values of your class properties from the IDE and you can monitor the values of your properties while the game runs in real time – for free – how cool is that?!
The “transform” variable is a property on every game object (and everything inside a scene is a game object) which handles the object's position in space: rotation, position, scale, etc. So on every Update call we translate the object's position so that it moves to the right of the screen.
We can't just move the plane a set amount per call to Update, because nobody knows how many times this will actually be called per second. Instead, we define the speed in units per second, and multiply it by the amount of time elapsed since the last call to Update (Time.deltaTime). This way the object always moves at the same speed, independently of the current frame rate.
The call to Translate takes 3 values – the translation it has to do on each axis. You probably notice that we are moving the airplane on the Z axis (3rd parameter) – we have to do that because we rotated the plane on the scene, so translating the Z axis moves it to the right from the perspective of the player!
Look at the “if” statement – we check if transform.position.x is bigger than 18 (remember why?) and if so, we set the airplane's position to the same coordinates but -18 on the X axis. We use new Vector3(x,y,z) to set the position – we'll be using a lot of these vectors for all positioning. Notice also that we set a random speed between 8 and 12 – this is just to make the plane move more randomly and keep things interesting.
At this point we are ready to see the airplane move!
Switch back to Unity. Drag the “PlayerClass” from “Project” panel onto “Player” in the “Hierarchy” panel – this way you attach the class to a game object. Select “Player” in “Hierarchy” and look in the “Inspector”- you’ll see a new strip appeared called “Player Class (Script)” where you also see your public property! yay! Set a value of “12” for it.
OK. Ready? Hit that Play button! Woot! You can see in both “Scene” and “Game” panels that the airplane flies around and when goes out of the right comes back from the left side. Also notice in the “Inspector” the X value of position is alive as well – it shows you where the plane is at any given moment. Also Speed changes to random values every time the plane flight wraps.
Once you’re done enjoying this coolness, don’t forget to hit Play again to stop the game.
Next up, time to give this plane a formidable foe – a menacing shark! Now that you’re familiar with the basics, things will go faster since we won’t be doing anything new for a while.
Need a break? We’ve covered a lot here! So if you get tired or just need to have a break, no problem – just save your Unity project and you can open it later. But! When you open a Unity project it opens by default an empty scene. To load the scene you are working on – double click “LevelScene” in the “Project” panel – now you can continue working.
Jumping the Shark
Go ahead and download and unarchive the Shark model. As you did before with the airplane, drag the file “shark.obj” onto the “Project” panel inside the “Models” folder and “sharktexture.bmp” inside “Models/Textures”.
From the menu choose “GameObject/Create other/Capsule” – rename the “Capsule” object in “Hierarchy” to “Shark”. In “Inspector” inside the “Mesh Filter” strip click the circle with dot inside and in the popup window double click the Shark model. Now you should see the Shark geometry in the “Scene” and “Game” panels.
Drag “sharktexture” from “Project” panel (it’s inside Models/Textures) onto the “Shark” object in “Hierarchy” – this gives your shark a vigorous mouth and some evil eyes! Pfew – I already want to bomb it!
Make sure “Shark” is selected and set the following properties in the “Inspector”: Position – [20, -3, 8], Scale – [1.2, 1.2, 1.2] – this will put the shark just right off the camera visible box – it’ll start moving from there towards the left side of the screen.
Now since we’d want the shark to interact with our bombs (by exploding, mwhahahahah!) we want the Collider of the shark to match more or less the shark’s geometry. As you see there’s a green capsule attached to the shark inside the scene. This is the shark’s collider. Let’s have it matching this evil predator’s body.
In the “Inspector” find the “Capsule Collider” strip and set the following values: Radius to “1”, Height to “5”, Direction “X-Axis”, Center to [0, 0, 0]. Now you see the capsule collider has rotated and matches more or less the shark’s body – better!
Last – select “shark” model in “Project” panel “Models” folder then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click “Apply” button.
Right-click inside the “Project” panel in the “Class” folder and choose “Create/C Sharp Script”, rename the new script to FishClass. Right-click and choose “Sync MonoDevelop Project”.
MonoDevelop will pop up. Open the FishClass.cs file and put inside the following code:
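The original code isn't reproduced here; from the description below (a per-second speed property, movement passed as a single Vector3, and a turn-around at the -30/30 bounds), FishClass was probably along these lines (the default speed, the new speed range, and the small nudge back from the edge are assumptions):

using UnityEngine;
using System.Collections;

public class FishClass : MonoBehaviour {

    // Units per second, tweakable from the Inspector
    public float speed = 6f;

    void Update () {
        // This time the translation is passed as a single Vector3 rather than three values
        transform.Translate(new Vector3(speed * Time.deltaTime, 0, 0));

        // At either bound: turn around, step back inside, and change speed
        if (transform.position.x > 30 || transform.position.x < -30) {
            transform.Rotate(new Vector3(0, 180, 0));
            transform.Translate(new Vector3(1f, 0, 0));
            speed = Random.Range(4f, 8f);
        }
    }
}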
It’s pretty similar to what we have already for the airplane. We have a speed property (per second) and in the Update event handler we use transform.Translate to move the shark.
Notice this time I used:
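The snippet itself was dropped from this copy; presumably it was the Vector3 overload of Translate, something like:

    transform.Translate(new Vector3(speed * Time.deltaTime, 0, 0));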
This is just to demo that some of these methods can take different parameters – however passing separately 3 values or 1 vector is pretty much the same.
Now let's see what the shark does when it reaches the bounds of the screen (-30 and 30 in this case, so there is a moment when the shark is offscreen and you can't easily ambush it when it comes back in).
When the shark reaches left or right bound it turns around moves a bit towards the bound and changes speed. This way it just goes back and forth, back and forth continuously.
The call to transform.Rotate(new Vector3(x,y,z)) obviously rotates the object around the axis by the given values, and transform.Translate(new Vector3(x,y,z)) you already know well.
Pretty easy! Switch back to Unity and drag the “FishClass” script onto the “Shark” object in “Hierarchy”. Now hit Play: you can see the huge shark going back and forth waiting to be bombed. Good job!
Adding the Clown Fish
Let’s do the same procedure again for our ClownFish. I’ll put it into a nice list for quick reference:
- Download and unarchive the ClownFish model.
- Drag “mesh_clownfish.obj” into “Project” panel inside “Models” folder and “clownfish.bmp” inside “Models/Textures”.
- Choose “GameObject/Create other/Capsule” and rename the “Capsule” object in “Hierarchy” to “ClownFish”.
- Click “Mesh Filter” circle-with-a-dot-button and from the popup double click the clownfish geometry.
- Drag the “clownfish” model texture onto “ClownFish” object in “Hierarchy”.
- While having “ClownFish” selected change the following properties in the “Inspector”:
- Position to [-20, -1, 7]
- Rotation to [0, 180, 0]
- Scale to [0.4, 0.3, 0.3]
- Radius to “4”
- Height to “4”
- Direction to “Z-axis”
- Center to [0, 0, 0].
Hit Play and see what happens – now you have two moving fish without having to write any extra code!
Everything works perfect – the fish go back and forth, plane is wrapping. We need some boooombs!
Set Us Up The Bomb
Download and unarchive the Can model. As usual, drag the “colourcan.obj” file in “Project” panel “Models” folder and “cantex.bmp” file in “Models/Textures”.
From the menu “GameObject/Create Other/Capsule”, rename the object to “Bomb”. From the Mesh Filter popup double click the can geometry. Drag the “cantex” texture onto the “Bomb” object in “Hierarchy”. In “Inspector” “Capsule collider” click this button to open the popup menu:
When the popup menu appears, choose “Reset” – this way the collider will automatically take the size of the assigned geometry – cool!
Next select the “colourcan” model in “Project” panel “Models” folder then in the “Inspector” find “Normals” and select “Calculate”, then scroll down and click “Apply” button.
Now let's dive into new stuff! Select the bomb object again, and inside the Capsule Collider strip check the “Is Trigger” checkbox – aha! Checking this makes the bomb object trigger an event when it collides with other objects.
But for this to happen we need to also assign a Rigid Body to the bomb (as at least one of the colliding objects needs to have a rigid body). From the menu choose “Component/Physics/Rigidbody” (Bomb should be selected in the Hierarchy!).
Once you do this, a new strip should appear in the “Inspector” called “Rigidbody”. Uncheck “Use gravity” (we won't use gravity) and check “Is Kinematic” to be able to control the body programmatically. This was all we needed to enable collisions!
Download and save on your disc this bomb releasing sound (which I made myself, lol!)
We would like to play this sound when the airplane releases a bomb, i.e. when the bomb first appears on the screen. Let’s do that!
Right-click in the “Project” panel and choose “Create/Folder”, rename the new folder to “Audio”. Drag the “bahh1.aif” file onto the “Audio” folder. Next drag the “bahh1″ sound file from “Project” panel onto the “Bomb” object in “Hierarchy”.
Believe it or not, that’s all we need to do – the sound is attached to the bomb and will play when the bomb appears on screen. Notice how easy some things are with Unity?
Select the “Bomb” object in “Hierarchy” and in “Inspector” find the “Audio Source” strip: see that “Play On Awake” checkbox is checked – this tells the audio to play when the object appears on the scene. Also look at the “Scene” panel – see the bomb has now a speaker attached?
Prefabricating your Game Objects
Remember that “Hierarchy” shows what's on the scene currently, and “Project” holds all your objects for you? This has something to do with our goal here – to have many bombs loaded on the plane and release them at will into the sea.
What we are going to do is – we are going to prefabricate a game object (it will be ready and set to appear on the scene), but we won’t add it to the scene- we are going to instantiate (or clone if you are a sci-fi fan) this “prefab” into a real living game object on the scene.
Right-click inside the “Project” panel and choose “Create/Folder”, rename it to “Prefabs”. Right-click “Prefabs” and choose “Create/Prefab”. Rename the new prefab to “BombPrefab”. Notice the little cube icon is white – this indicates an empty prefab.
Now – drag the “Bomb” from Hierarchy onto the “BombPrefab” in “Project”. Notice the cube icon is now blue – means a full prefab, ready to be cloned. Also important – look at the “Hierarchy” panel now – “Bomb” font changed to blue color – that means this object now is an instance of a prefab.
Now that we have our bomb cookie cutter set, we don’t need the original bomb object on the scene – right-click on “Bomb” in “Hierarchy” and choose “Delete”.
Let's get coding! Switch to MonoDevelop and open up PlayerClass.cs. Under the “speed” property declaration add:
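The line that goes here is simply the prefab reference described next; it would have been something like:

    public GameObject bombPrefab;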
Have you guessed already? In this property we’ll hold a reference to the BombPrefab and we’ll make instances as we wish. Notice the property type is “GameObject”, as I said earlier everything in the game is a GameObject (much like NSObject in Cocoa), so it’s safe to set that type for just about anything.
Now switch back to Unity and select the “Player”. As you expected in the “Inspector” under “Script” there’s a new property “BombPrefab”. Let’s set its value: drag the “BombPrefab” from “Project” panel onto the “Inspector” where it says “None(GameObject)” and drop – now the field indicates it has BombPrefab prefab attached as value. Cool!
We’re going to need a C# class also for the bomb – right-click inside “Project” in “Class” folder and choose “Create/C Sharp Script”, rename it to “BombClass”. Right-click and “Sync MonoDevelop Project” – MonoDevelop pops up. Open up BombClass.cs and replace the contents with this code:
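The listing is missing from this copy; going by the description below (translate every frame, destroy the object once it leaves the screen), BombClass was probably close to this sketch (the fall speed and the exact lower bound are assumptions):

using UnityEngine;
using System.Collections;

public class BombClass : MonoBehaviour {

    // Fall speed in units per second
    public float speed = 10f;

    void Update () {
        // The bomb falls straight down
        transform.Translate(new Vector3(0, -speed * Time.deltaTime, 0));

        // Once it is well below the visible area, remove it from the scene;
        // we can always clone a new bomb from the prefab
        if (transform.position.y < -15) {
            Destroy(this.gameObject);
        }
    }
}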
This is pretty similar to everything we’ve done up to now – we translate the object every frame and when out of the screen bounds we react appropriately. In the bomb case we just want to destroy the object since we can always make new ones from our bomb prefab.
In the code note that “this” refers to the C# bomb class, while the gameObject property refers to the object on the scene – so we destroy the object on the scene and all components attached to it. We'll have a look at the game object hierarchy in Part 2, when you access components attached to an object programmatically.
Bombing that shark!
Finally the part you’ve been waiting for – gratuitous violence! :]
Open up PlayerClass.cs. At the end of the Update method add:
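The added lines aren't shown in this copy, but from the line-by-line walkthrough below they amounted to something like:

        if (Input.anyKeyDown) {
            // Clone the bomb prefab and drop it from the plane's current position
            GameObject bomb = (GameObject)Instantiate(bombPrefab);
            bomb.transform.position = transform.position;
        }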
Let’s go over this line by line.
- Input is the class giving you access to the keyboard, mouse, accelerometer and touches. Input.anyKeyDown is true only on the frame a key is first pressed; it then goes back to false until another key is pressed. anyKeyDown is actually a handy abstraction – it is true when a mouse button was clicked, a keyboard key was pressed or (!) a tap on the iPhone's screen was received.
- (GameObject)Instantiate(bombPrefab) is the magic line that creates an instance from a prefab and adds it to the scene.
- Finally we set the position of the bomb to be same as the airplane’s.
Cool – we have our bomb created when the player taps the screen, it then starts to fall down and when out of the screen destroys itself.
Let’s give it a try! Switch back to Unity and hit Play – now if you click inside the “Game” panel (simulating a tap) you will see a bomb is created where the plane is at that point.
Click many times – many bombs are created. You should also hear the bomb sounds. But the bombs don't fall down! Why? Can you figure out what the problem is by yourselves? Answer after the break.
I hope you figured it out, but here's the problem: you haven't yet assigned the BombClass script to the Bomb prefab – that's why the bombs don't fall down. Drag “BombClass” from your “Project” panel “Class” folder onto “BombPrefab” in the “Prefabs” folder in the same panel. Check in the “Inspector” that you see the Bomb Class Script strip. Now hit Play again. That's better!
Still not perfect though – those sharks don’t die when you hit them. Since we already configured the colliders and the rigidbody component of the bomb, we just need to add the code to react to collision. Switch back to MonoDevelop and add this new method to the BombClass:
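The method itself was lost in this copy; from the explanation below (check the other collider's name, reset the shark's rotation and position, then destroy the bomb), it looked roughly like this sketch (the shark's reset position comes from the values we set earlier, [20, -3, 8]; Quaternion.identity for the rotation reset is an assumption):

    void OnTriggerEnter(Collider obj) {
        // We only care about hitting the shark
        if (obj.gameObject.name == "Shark") {
            // Reset the shark's rotation and send it back to its starting position
            obj.gameObject.transform.rotation = Quaternion.identity;
            obj.gameObject.transform.position = new Vector3(20f, -3f, 8f);

            // The bomb has done its job: remove it from the scene
            Destroy(this.gameObject);
        }
    }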
Let’s go over the code line by line again:
- OnTriggerEnter is a method called when the body attached collides with another body, and the 2nd body is passed as a parameter.
- Here we check if the object the bomb hits is called “Shark”.
- If the shark is hit, then 1st reset the object rotation.
- 2nd, we reset the shark back to its original position.
- Finally, we call Destroy(this.gameObject) to make the bomb disappear from the scene.
Pretty easy again! Isn’t it? That’s about all you need – switch back to Unity, and run your game! Hit sharks disappear and new ones come in.
You can also choose from the menu “File/Build&Run” and when Xcode pops up hit “Run” in Xcode – now you have the game on your iPhone – sweet!
Where To Go From Here?
Here is a sample project with all of the code from the above tutorial.
I hope you amused yourselves and learned a ton of new things – in Part 2 of the series we’re going to spice up things a notch and make this game really challenging and fun!
In the meantime, if you have any questions about this tutorial or Unity in general, please join the forum discussion below!
15 Comments
Awesome Tutor thank you
I would have preferred 2.5D for Cocos2D but you can't please everyone. Someone always has to complain LOL
Thanks.
Sample project is not complete, right?
I failed to see the background in the Game view. Do I need to change Render Settings for this game?
Thanks!
But one question : can we insert 3d animated models like a shark with a moving tail when he's swimming for more realistic gameplay.
Thx again for all your great tutorials.
P.S.: There is a nice application you can find on the App Store for simulating your games from Unity; it's called Unity Remote, so if you don't have a developer profile for testing the game on your iPhone you can use this app, it's very easy to use.
Inserting animations within Unity is, from the little knowledge I have of the engine, quite simple. Check out where they have dozens of hours of free tutorials on Unity, they are very short and easy to pick up
I saw on your wife's blog that you are going to do a tutorial on how to make a catapult game that uses box2d physics similar to angry birds. I'm working on a game with similar functionality and just having a time. I've got lots of programming experience but am totally new to cocos2d and box2d and iphone/ipad development in general. I could really use that tutorial. Do you have any idea when the tutorial will be ready? Any available code at this time? Specifically I'm interested in the parallax scrolling of the background, attaching the catapult to the world and how to move it with the revolute joint. Any help you can give me would be greatly appreciated.
Thanks,
Bob Hunt
Thanks Ray, I'm really looking forward to the tutorial. I love your book also. It's really helped me a lot.
Take care,
Bob Hunt
Is it possible to work on the Project in Unity from a Windows PC, then copy the project files over to a Mac and then build it for the iPhone?
My MBP is quite old and very slow, I just had it crash on me when running Unity so I would rather try it on my workstation. But the deploying to my iPhone went fine. So I hope I can just copy the whole project accross (or place it on a NAS from the beginning).
I do have a problem: after the first collision happens (the bomb is destroyed and the shark is reset), subsequent bombs do not collide with the shark. Why is that?
The nifty thing about the repo is that you can build both Android apks and iPhone ipas using command line tools. It's a nice start to automating the build.
To do so (assuming a mac with xcode):
1) git clone
2) cd unity-shark-bomber
3) source scripts/envsetup.sh
4) hmm # this will give you help
5) sb-build-android # builds android apk (you may need to tweak the script a bit for your env)
6) sb-build-ios # builds ipa (you'll need to provide some env vars first. Run the script and it'll tell you what you need)
enjoy!
The tutorial works just fine for me.
|
http://www.raywenderlich.com/4551/how-to-make-a-2-5d-game-with-unity-tutorial-part-1
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Chart & Graphs Tutorials in Java
Chart & Graphs Tutorials in Java
... in Java. Our Chart and Graphs
tutorials will help learn everything you need to learn about chart and graphs
programming in Java. We have provided many examples
graphs/charts - JSP-Servlet
graphs/charts hi,
How to create a graph/chart in java by reading...* ;
import org.jfree.data.jdbc.JDBCCategoryDataset;
public class Chart{
public... chart";
JDBCCategoryDataset dataset=new JDBCCategoryDataset("jdbc:mysql
Good Looking Java Charts and Graphs
Good Looking Java Charts and Graphs Is there a java chart library that will generate charts and graphs with the quality of visifire or fusion charts?
The JFreeChart graph quality is not professional looking. Unless it can
Drawing Graphs - Swing AWT
Drawing Graphs hi i am doing a small project in java swings . there is a database table having two columns A,B i want to plot a graph to that values in database the graph must be interactive graph
Multiline graphs in java - Java Beginners
Multiline graphs in java How to draw a multiline graph in java, One... JFreeChart chart = createChart(dataset);
final ChartPanel chartPanel = new ChartPanel(chart);
chartPanel.setPreferredSize(new java.awt.Dimension(500
Graphs using JFreeChart - Java Beginners
Graphs using JFreeChart Hi Friend,
I need to draw a graph... chart = ChartFactory.createTimeSeriesChart(
"Power consumed...
);
ChartPanel chartPanel = new ChartPanel(chart, false
Types of Graphs and Charts
applications charts and graphs are extensively
used for visualizing the data.
In Java... Types of Graphs and Charts
... the data and call the few methods to generate the charts or graphs
of
java tutorials
java tutorials Hi,
Much appreciated response. i am looking for the links of java tutorials which describes both core and advanced java concepts... java in detail with relevant explanations and examples systematically ,it would
RDF Tutorials
RDF Tutorials
Generate RDF file in Java
In this example we... generates a RDF
file with the use of "Jena". Jena is a java API
Graphs - Struts
implementation, I would suggest Google charts.
graphs in jsp - Java3D
graphs in jsp i want to present my data from database in graphs how can i present in jsp and servlet.please guide me.thanz in advance
java chartwks February 26, 2012 at 2:34 PM
how can i use chart with NetBeansid
graph draw problemashwini April 3, 2012 at 11:03 PM
i wanted to draw a graph with buttons n lines between them.. i hv written a code using paint but it displays first line and then buttons,by the time lines disappear...please help me out.
Wow, these are so very very helpful.mordechai serraf April 9, 2013 at 3:27 AM
These examples and explanations have helped me so much! I managed to learn how to graph in Java very quickly thanks to these. Thanks again! :)
|
http://www.roseindia.net/discussion/18123-Chart-&-Graphs-Tutorials-in-Java.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
list of subsets from N elements
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 21, 2005 10:45:00
0
Hi everyone, first time posting in this particular forum, hope i chose the right place
Ok so i have this problem; I am given a vector containing Strings in them say for example,"name","Telephone","address"..etc (personal information bascially).
To simplify the problem i think maybe ill just make each
string
in the array correspond to a letter in the alphabet ie.name=a,telephone=b,address=c.
Ok so say i am passed a vector with "a","b","c", i need to be able to create all the sets of these, without repeating any letters in a set. So basically my answer to this should just be:
a,b,c,ab,ac,bc,abc
But my method to do this must also be able to handle any amount of inputs, not just three, so i could have a,b,c,d,e...then must be able to find all the sets of those as well.
I started this using for loops but i think i began to realize that it didn't work for N inputs, because each time i wanted more inputs i needed more for loops, so it's basically useless what i've done so far. Maybe recursion or something might be useful, anyone have any ideas?
Thanks for patiently reading, hope to hear some suggestions!
National Research Council
Internet Logic Department
Stephen Huey
Ranch Hand
Joined: Jul 15, 2003
Posts: 618
posted
Feb 21, 2005 12:19:00
0
Sounds like a school assignment...yeah, I'd say go for recursion. If you need to double check your work (make sure you haven't added the same thing twice to a set), you might try using the java.util.Set class (or rather, maybe
java.util.HashSet
or something like that).
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 21, 2005 13:13:00
0
hey,
lol, its not a school assignment at all, haha, i wish it was, im actually writing a program at work, this is a minor part to it but its essential for forward progress in the project.I asked a few people here at work and they say they may have some notes from their Algorithms and Data structures classes that might be able to help me, but we'll see i supose.
I infact am not a huge fan of recursion, cause it actually requires more effort to think up the logic for it,(poor me eh) haha but i guess i can give it a try.As for the java.util.set class and the hashSet, ill give them a looking at, still any other ideas would be much appreciated!
thanks again
[ February 21, 2005: Message edited by: luc comeau ]
Igor Stojanovic
Ranch Hand
Joined: Feb 18, 2005
Posts: 58
posted
Feb 21, 2005 13:59:00
0
Hi luc,
I think you should use Varargs (
Methods That Allow Variable-Length Parameter Lists
)
The following class defines a method that accepts a vararg, and invokes the same method passing it first several arguments and then fewer arguments. The arguments come in as an array.
public class Varargs {
    public static void main(String...args) {
        Varargs test = new Varargs();
        //call the method
        System.out.println(test.min(9,6,7,4,6,5,2));
        System.out.println(test.min(5,1,8,6));
    }

    /* define the method that can accept any number of arguments. */
    public int min(Integer...args) {
        boolean first = true;
        Integer min = args[0];
        for (Integer i : args) {
            if (first) {
                min = i;
                first = false;
            } else if (min.compareTo(i) > 0) {
                min = i;
            }
        }
        return min;
    }
}

The output is 2 1
When combined with the for each loop, varargs are very easy to use.
Note that varargs are new to SDK 5.0, so you need to compile for that version of
Java
to use them.
kind regards
Igor
[ February 21, 2005: Message edited by: Igor Stojanovic ]
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Feb 21, 2005 14:13:00
0
class CombinationsAll {
    String startCombo = "abcd";

    public CombinationsAll() {
        for(int x = 1; x <= startCombo.length(); x++) {
            int[] arr = new int[x];
            for(int y = 0; y < x; y++) arr[y] = y;
            getStartCombos(arr);
        }
        System.exit(0);
    }

    private void getStartCombos(int arr[]) {
        String thisCombo = "";
        for(int x = 0; x < arr.length; x++) thisCombo += startCombo.charAt(arr[x]);
        getCombos("", thisCombo);
        if(arr[0] == (startCombo.length()-1)-(arr.length-1-0)) return;
        if(arr[arr.length-1] == startCombo.length()-1) {
            for(int i = 0; i < arr.length; i++) {
                if(arr[i] == (startCombo.length()-1)-(arr.length-1-i)) {
                    arr[i-1]++;
                    for(int ii = i; ii < arr.length; ii++) {
                        arr[ii] = arr[ii-1] + 1;
                    }
                    break;
                }
            }
        } else {
            arr[arr.length-1]++;
        }
        getStartCombos(arr);
    }

    private void getCombos(String str1, String str2) {
        int i = str2.length()-1;
        if(i < 1) System.out.println(str1+str2);
        else for(int ii = 0; ii <= i; ii++)
            getCombos(str1 + str2.substring(ii,ii+1), str2.substring(0, ii) + str2.substring(str2.length() - (i-ii)));
    }

    public static void main(String[] args) {new CombinationsAll();}
}
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 22, 2005 06:03:00
0
hi everyone
Igor, thanks for the effort with the Varargs, i should have mentioned before that i am not using 1.5, i have started this project before it was released so im using 1.4.2, but it i cant figure something out i will consider that, thanks
Michael, I gave your code a run and the only thing that appears wrong with it is that it is writing out all "permutations" of the String "abcd" (well its not really wrong but in my situation it is); i in fact need the combinations of "abcd" where ordering doesn't matter. So where your program counts ab,ba as two different elements, i in fact would only count them as one, like i wouldn't consider them at all as being different.
Im going to try and trace through your code and see if i can figure out how to change this, but if its easy to fix and repost please feel free, but thanks for the time u took to post that in the first place, much appreciated !
P.s- has anyone ever heard of the "Banker's Sequence"
iv done a little research and aparently this sequence is the fastest way to approach this problem, but iv yet to find any implementation of it in java.
It does the subsets int he way you would suspect
if you had abcd, it would order them like this
{a,b,c,d},{ab,ac,ad},{bc,bd},{cd},{abc,abd},{acd},{bcd},{abcd}
just a thought, maybe this might strike some more ideas.
[ February 22, 2005: Message edited by: luc comeau ]
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Feb 22, 2005 14:27:00
0
try this one then
class CombinationSets {
    String set = "abcd";

    public CombinationSets() {
        for(int x = 1; x <= set.length(); x++) {
            int[] array = new int[x];
            for(int y = 0; y < x; y++) array[y] = y;
            printSets(array);
        }
    }

    public void printSets(int[] arr) {
        String thisCombo = "";
        while(true) {
            thisCombo = "";
            for(int x = 0; x < arr.length; x++) thisCombo += set.charAt(arr[x]);
            System.out.println(thisCombo);
            if(arr[0] == (set.length()-1)-(arr.length-1)) break;
            if(arr[arr.length-1] == set.length()-1) {
                for(int i = 0; i < arr.length; i++) {
                    if(arr[i] == (set.length()-1)-(arr.length-1-i)) {
                        arr[i-1]++;
                        for(int ii = i; ii < arr.length; ii++) arr[ii] = arr[ii-1] + 1;
                        break;
                    }
                }
            } else arr[arr.length-1]++;
        }
    }

    public static void main(String[] args) {new CombinationSets();}
}
David Harkness
Ranch Hand
Joined: Aug 07, 2003
Posts: 1646
posted
Feb 22, 2005 15:52:00
0
If it helps, realize that what you're effectively doing is counting in binary. Each element in the original list is a bit representing whether it is in or out of any given subset, so you're writing out all possible bit combinations. If there were three elements, the subsets would be
000 -- you probably want to omit the empty subset
001
010
011
100
101
110
111
You could use a recursive algorithm if the number of elements is small enough, but you can certainly rewrite it non-recursively.
Sorry, I only have time for that much right now, but hopefully it can give you enough to solve it.
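To make the bit-counting idea concrete, here is a small, self-contained sketch (not from the original thread) that lists every non-empty subset of an array by counting from 1 to 2^n - 1 and reading each bit of the counter as an in/out flag; it sticks to pre-5.0 Java since luc mentioned being on 1.4.2:

public class SubsetLister {
    public static void main(String[] args) {
        String[] items = {"a", "b", "c"};
        int n = items.length;
        // Count from 1 to 2^n - 1; bit i of the counter says whether items[i] is in the subset.
        for (int mask = 1; mask < (1 << n); mask++) {
            StringBuffer subset = new StringBuffer();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    subset.append(items[i]);
                }
            }
            System.out.println(subset);
        }
    }
}

For a handful of elements this is usually simpler than a recursive version, though with an int mask it is limited to at most 31 elements.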
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 23, 2005 06:52:00
0
hi everyone again,
David, i first would like to say thats exactly what i was working on yesterday, seems that i always come up with the same ideas as you as u post them, I wrote up some code that does the binary representation of them, for the sake of helping others wanting to do this ill post the code, or if you were intrested;
public static void output(int[] string, int position) {
    int[] temp_string = new int[length];
    int index = 0;
    int i;
    for (i = 0; i < length; i++) {
        if ((index < position) && (string[index] == i)) {
            temp_string[i] = 1;
            index++;
        }
        else temp_string[i] = 0;
    }
    for (i = 0; i < length; i++) System.out.println(temp_string[i]);
    if (i%length==0) System.out.println("\r");
}

public static void generate(int[] string, int position, int positions) {
    if (position < positions) {
        if (position == 0) {
            for (int i = 0; i < length; i++) {
                string[position] = i;
                generate(string, position + 1, positions);
            }
        } else {
            for (int i = string[position - 1] + 1; i < length; i++) {
                string[position] = i;
                generate(string, position + 1, positions);
            }
        }
    } else output(string, positions);
}

public static void main (String[] args) {
    length = 3; /* this is hard coded for now but can be any value */
    for (int i = 0; i <= length; i++) {
        int[] string = new int[length];
        generate(string, 0, i);
    }
    System.exit(0);
}
I havn't yet decided what i want to do with these binary representations, but for the most part this thread has accomplished what i asked !!
Thanks
As for Michael, ill give your code a try and see if maybe i can find some way to incorporate your idea it may be useful for a later stage of my coding, non the less, thanks so much !
[ February 23, 2005: Message edited by: luc comeau ]
Luke Nezda
Greenhorn
Joined: Jun 23, 2008
Posts: 1
posted
Jun 23, 2008 21:08:00
0
Last solution offered is basically a copy/paste from the C++ code from the December 2000 academic paper "Efficiently Enumerating the Subsets of a Set" by J. Loughry, J.I. van Hemert, and L. Schoofs if anyone is interested in more details on the Banker's Sequence and this implementation by its original authors.
I agree. Here's the link:
|
http://www.coderanch.com/t/375951/java/java/creating-list-subsets-elements
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Tk_GetPixmap, Tk_FreePixmap - allocate and free pixmaps
#include <tk.h>
Pixmap Tk_GetPixmap(display, d, width, height, depth)
Tk_FreePixmap(display, pixmap)
display – X display for the pixmap.
d – Pixmap or window where the new pixmap will be used for drawing.
width – Width of pixmap.
height – Height of pixmap.
depth – Number of bits per pixel in pixmap.
pixmap – Pixmap to destroy.
pixmap, resource identifier
|
http://search.cpan.org/~srezic/Tk-804.030_501/pod/pTk/GetPixmap.pod
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
java code to upload and download a file - Java Beginners
java code to upload and download a file Is their any java code to upload a file and download a file from databse,
My requirement is how can i... and Download file upload in struts - Struts
java file upload in struts i need code for upload and download file using struts in flex.plese help me Hi Friend,
Please visit the following links:
http
upload and download mp3
upload and download mp3 code of upload and download Mp3 file in mysql database using jsp
and plz mention which data type used to store mp3 file in mysql database
Upload and Download multiple files
Upload and Download multiple files Hello Sir/Madam,
I need a simple code for upload and download multiple files(it may be image,doc... link:
Photo upload, Download
Photo upload, Download Hi
I am using NetBeans IDE for developing application for java(Swings). And i am using MySQL as backend database.
My... dont know whether this code proper) . My question is how can i load
file upload download - Java Beginners
file upload download how to upload and download files from one system to another using java.io.* and java.net.* only please send me code
Struts File Upload and Save
Struts File Upload and Save
... regarding "Struts file
upload example". It does not contain any... example will provide you
with the code to upload the file ,in the upload
Based on struts Upload - Struts
Based on struts Upload hi,
i can upload the file in struts but i want the example how to delete uploaded file.Can you please give the code
upload and download a file - Java Beginners
upload and download a file how to upload a file into project folder in eclipse and how to download the same using jsp
Hi Friend,
Try the following code:
1)page.jsp:
Display file upload form to the user
Upload and download file - JSP-Servlet
Upload and download file What is JSP code to upload and download a document in a web page? Hi Friend,
Try the following code to upload............
Now to download the word document file, try the following code
how to upload and download images in java?
how to upload and download images in java? what is the code....
Upload and Download images:
1)page.jsp:
<%@ page language="java" %>
<HTML>
<HEAD><TITLE>Display file upload form
File Upload And download JSP Urgent - JSP-Servlet
File Upload And download JSP Urgent Respected Sir/Madam,
I... Download in JSP.. In the Admin window, There must be "Upload" provision where admin can upload files.. And in the user window, There must be a "Download" provision
Download Struts
in the source code of the Struts
framework you can download the source code from
http...Learn how to Download Struts for application development or just for learning the new version of Struts.
This video tutorial shows you how you can download
download
download how to download the file in server using php
code:
"<form method="post" action="">
enter the folder name<input type="text" name... the wrong foldername";
end1:
}
?>
</form>`print("code sample
upload video
upload video hi sir i want to code of upload and download video in mysql database using jsp...
and plz give advice which data type is used to store video in table
FileUpload and Download
FileUpload and Download Hello sir/madam,
I need a simple code for File upload and Download in jsp using sql server,that uploaded file should... be download with its full content...can you please help me... im in urgent i have
upload and download video
upload and download video how to upload and download video in mysql...;Display file upload form to the user</TITLE></HEAD>
<BODY>
<FORM...;center><td colspan="2"><p align="center"><B>UPLOAD THE FILE<
upload and download files from ftp server using servlet - Ajax
upload and download files from ftp server using servlet Hi,Sir... for upload and download files from ftp server using servlet and how to use servlet for these applications.I have send my code to you previous time
File Download in jsp
File Download in jsp file upload code is working can u plz provide me file download
upload and download files - JSP-Servlet
upload and download files HI!!
how can I upload (more than 1 file) and download the files using jsp.
Is any lib folders to be pasted? kindly... and download files in JSP visit to :
Struts File Upload Example
Struts File Upload Example
...
is the heart of the struts file upload application. This interface represents a
file... to upload is as
follows:
<%@ taglib uri="/tags/struts-bean
how to upload and download images using buttons in jsp?
how to upload and download images using buttons in jsp? how to upload and download images using buttons in jsp
Download and Build from Source
Of An Application
Download Source Code...
Download and Build from Source
Shopping cart application developed using Struts 2.2.1 and MySQL can be
downloaded from
Struts file uploading - Struts
Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm... can again download the same file in future.
It is working fine when I
Struts File Upload Example - Struts
Struts File Upload Example hi,
when i tried the struts file upload example(Author is Deepak) from the following URL
i have succeeded. but when i try to upload file
download code from database
download code from database how to download files from database
php download file code
php download file code PHP code to download the files from the remote server
Photo Upload - JSP-Servlet
Photo Upload Dear Sir,
I want some help from you, i.e. code for image upload and download using Servlet/JSP.
Thanks&Regards,
VijayaBabu.M ... = 0;
//this loop converting the uploaded file into byte code
FTP File Upload in Java
easy to write code for FTP File
Upload in Java. I am assuming that you have FTP... a connection to FTP Server and perform
upload/download/move/delete operations... is client.login(userName, password);
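The snippet above reads like Apache Commons Net; a small self-contained sketch of that upload flow, with the host, credentials and file names as placeholders, could look like this:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUploadDemo {
    public static void main(String[] args) throws Exception {
        FTPClient client = new FTPClient();
        try {
            client.connect("ftp.example.com");            // placeholder host
            if (!client.login("userName", "password")) {  // as in client.login(...) above
                throw new IllegalStateException("FTP login failed");
            }
            client.setFileType(FTP.BINARY_FILE_TYPE);
            client.enterLocalPassiveMode();
            try (InputStream in = new FileInputStream("local.txt")) {
                client.storeFile("remote.txt", in);       // the actual upload
            }
            client.logout();
        } finally {
            if (client.isConnected()) {
                client.disconnect();
            }
        }
    }
}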
Struts 2 File Upload error Hi! I am trying implement a file upload using Struts 2, I use this article, but now the server response the error... Upload In Struts2
Thanks
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
Download and Installing Struts 2
Download Struts 2.0
In this section we will download and install the Struts 2.0 on the latest
version... development server. Then we will download Struts 2.0 and install the
struts
download excel
download excel hi i create an excel file but i don't know how to give a download link to that excel file please give me any code or steps to give a download link
Downloading Struts & Hibernate
;
In this we will download Struts & Hibernate....
Download Struts
The latest release of Struts can be downloaded from http... libext under "C:\Struts-Hibernate-Integration\code\WEB-INF\"
. We
FileUpload and Download
coding for Upload and download file, but it is not stored in database and also it s not download the file with its content... just it download doc with 0 Bytes...;
<HTML>
<HEAD><TITLE>Display file upload form to the user<
file download
file download I uploaded a file and saved the path in database. Now i want to download it can u plz provide
HTML Upload
HTML Upload Hi,
I want to upload a word / excel document using the html code (web interface)need to get displayed on another webpage. Please let me the coding to display on another webpage using code - Struts
struts code In STRUTS FRAMEWORK
we have a login form with fields
USERNAME:
In this admin
can login and also narmal uses can log...://
Thanks
How to download JDK source code?
How to download JDK source code?
This articles explains you how you can download the source code of JDK. You
will learn how to download JDK source code... to download JDK source code?
JDK installer is packaged with the binary of the JDK
how to distinguish engines having same code - Struts
how to distinguish engines having same code hi we are using struts... and it is kept in central repository.user can change the file and again upload... introduced a new engine for testing purpose it has almost same code as previous engine
image upload
to database
The given code allow the user to browse and upload selected file...image upload Hello sir I want to upload image or any other type... be upload in the server and their path should be stored in database either
download code using servlet - Servlet Interview Questions
download code using servlet How to download a file from web to our system using Servlet
upload a image
upload a image sir,how can i upload a image into a specified folder using jsp Hi Friend,Try the following code:1)page.jsp:<p>&...;<HEAD><TITLE>Display file upload form to the user<
Jsp Upload
an error when Uploading to an database Mysql
the code is given below</p>...;
<p>else { </p>
<p>out.println("unsucessfull to upload...;<TITLE>Display file upload form to the user</TITLE></HEAD>
Upload image
Upload image Hai i beginner of Java ME i want code to capture QR Code image and send to the server and display value in Mobile Screen i want code..., here is a code:
package Example;
import javax.microedition.midlet.*;
import
Ajax File Upload Example
;
Download Complete Source Code...Ajax File Upload Example
This application illustrates how to upload a file using
upload image
upload image how can i retrieve an image from mysql using jsp... problem is when i use the retrieval code, it displays only a small box instead of the image at the screen. The retrieval and upload code is here..
i.upload.jsp
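The usual cause of the "small box instead of the image" symptom is writing the image bytes into the HTML page itself instead of serving them from their own URL that an <img> tag points to. A sketch of such an image-serving servlet follows; the table name, column name and connection details are invented for illustration, and the MySQL driver jar must be on the classpath:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ImageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        int id = Integer.parseInt(request.getParameter("id"));
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/test", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT image FROM images WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    response.setContentType("image/jpeg");
                    try (InputStream in = rs.getBinaryStream("image");
                         OutputStream out = response.getOutputStream()) {
                        byte[] buf = new byte[4096];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                        }
                    }
                } else {
                    response.sendError(HttpServletResponse.SC_NOT_FOUND);
                }
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}

The JSP then only needs markup such as <img src="image?id=5"> for each stored row.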
download java source code for offline referral
download java source code for offline referral how can i download complete java tutorial from rose india so that i can refer them offline
struts2 - Struts
struts2 hello, am trying to create a struts 2 application that
allows you to upload and download files from your server, it has been challenging for me, can some one help Hi Friend,
Please visit the following
Excel File data upload into postgresql database in struts 1.x
Excel File data upload into postgresql database in struts 1.x Dear members please explain how Excel Files data upload into postgresql database in struts
1.x
To scan a image and upload to server
To scan a image and upload to server I am beginner of JavaME I want a code to scan a image and upload to server
File Upload Servlet 3.0 Example
as :
Download Source Code...File Upload Servlet 3.0 Example
In this tutorial you will learn how to upload... annotation to upload the file. Earlier versions than the servlet 3.0
specification were
Spring 2.5 MVC File Upload
server. You can download the Spring MVC file upload example code at the
end... folder that use for validation file upload .The code of
"... as :
Download Code
Download example code
How to download, compile and test the tutorials using ant.
the application on tomcat:
Download the code and then extract it using
WinZip...
Compiling and running Struts 2.1.8 examples with ant
In this section we will learn how to download
|
http://www.roseindia.net/tutorialhelp/comment/16898
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
I've been working on the Arduino copy of optiboot, and have a new version that could use some testing. Hex file (for m328) attached. Source and stuff at
Code space reduction (alas, more features show up to consume the saved space)
380: optiboot has problems uploading sketches bigger than about 30 KB
// If we are in RWW section, immediately start page erase
#if !defined(BLUECONTROLLER)
      if ((uint8_t)(address >> 8) < (uint8_t)(NRWWSTART >> 8))
    7eb0:  8d 2d        mov   r24, r13
    7eb2:  99 27        eor   r25, r25
    7eb4:  08 2f        mov   r16, r24
    7eb6:  80 37        cpi   r24, 0x70   ; 112
    7eb8:  20 f4        brcc  .+8         ; 0x7ec2 <main+0xc2>
        __boot_page_erase_short((uint16_t)(void*)address);
    7eba:  83 e0        ldi   r24, 0x03   ; 3
    7ebc:  f6 01        movw  r30, r12
    7ebe:  87 bf        out   0x37, r24   ; 55
    7ec0:  e8 95        spm
    7ec2:  c2 e0        ldi   r28, 0x02   ; 2
    7ec4:  d1 e0        ldi   r29, 0x01   ; 1
#endif
 [...]
      // If we are in NRWW section, page erase has to be delayed until now.
      // Todo: Take RAMPZ into account
#if !defined(BLUECONTROLLER)
      if (!((uint8_t)(address >> 8) < (uint8_t)(NRWWSTART >> 8)))
    7ece:  00 37        cpi   r16, 0x70   ; 112
    7ed0:  20 f0        brcs  .+8         ; 0x7eda <main+0xda>
#endif
        __boot_page_erase_short((uint16_t)(void*)address);
    7ed2:  83 e0        ldi   r24, 0x03   ; 3
    7ed4:  f6 01        movw  r30, r12
    7ed6:  87 bf        out   0x37, r24   ; 55
    7ed8:  e8 95        spm
      }
#endif
Is there a reason for using GitHub instead of Google Code?
Comparing the time with and without overlapping erase gave no difference.
The Uno was over TWICE as fast as the FTDI-based Arduino, with everything else being the same
# Am I right in thinking that the bootloader exits if it detects any corrupt serial communication, on the assumption that the baud-rate is wrong and therefore the data is not coming from a "normal" AVRdude upload? Should we expect any communication data to be lost in this process, or will the application be able to re-read the corrupt character at the correct baud rate?
# I see the version numbers are put into the last two bytes of the bootloader block, and are now accessible both to the application code and via the serial port. But how do I query the version numbers over the serial port?
Actually, it must detect nothing but corrupt data for the entire bootloader timeout (~1s) to exit. I'm not sure that that's a great algorithm, but it seems to be working OK.
### default case of main loop:
      else {
        // This covers the response to commands like STK_ENTER_PROGMODE
        verifySpace();
      }

### will start app if the next character is != CRC_EOP
void verifySpace() {
  if (getch() != CRC_EOP) {
    watchdogConfig(WATCHDOG_16MS);    // shorten WD timeout
    while (1)                         // and busy-loop so that WD causes
      ;                               //  a reset and app start.
#if defined(BLUECONTROLLER)
      if(ch == STK_GET_SYNC) {
        // this is the initial sequence, sent by avrdude
        verifySpace();
        blueCAvrdudeSynced = 1;   // ignore all errors as long as this is 0
      }
      else
#endif

void verifySpace() {
  if (getch() == CRC_EOP) {
    putch(STK_INSYNC);
  } else {
#if defined(BLUECONTROLLER)
    // ignore error when not synced, otherwise some initial garbage will exit the bootloader
    if(blueCAvrdudeSynced)
      appStart();
#else
    appStart();
#endif
  }
}
Needs more linux testing.
No, one character which doesn't fit in the protocol is enough:
I think your patch would make that worse, assuming the watchdog is still always reset in getch(). Normally, getting to the sketch promptly when the character stream doesn't look like bootloader commands is a good thing. Any ideas on resolving this incompatibility?
|
http://forum.arduino.cc/index.php?topic=64105.msg468857
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Regex.Replace Method (String, String, String)
In a specified input string, replaces all strings that match a specified regular expression with a specified replacement.
- replacement
- Type: System.String
The replacement string.
Return Value
Type: System.String
A new string that is identical to the input string, except that the replacement string takes the place of each matched string. If pattern is not matched in the current instance, the method returns the current instance unchanged. The search for matches starts at the beginning of the input string.
The replacement parameter specifies the string that is to replace each match in input. replacement can consist of any combination of literal text and substitutions. To set a time-out interval when replacing a pattern match, use the Regex.Replace(String, String, String, RegexOptions, TimeSpan) overload instead.
The following example uses a regular expression to replace UNC paths that refer to drives on the local machine with local drive paths. To run the example successfully, you should replace the literal string "MyMachine" with your local machine name.
using System;
using System.Text.RegularExpressions;

public class Example
{
   public static void Main()
   {
      // Get drives available on local computer and form into a single character expression.
      string[] drives = Environment.GetLogicalDrives();
      string driveNames = String.Empty;
      foreach (string drive in drives)
         driveNames += drive.Substring(0,1);

      // Create regular expression pattern dynamically based on local machine information.
      string pattern = @"\\\\(?i:" + Environment.MachineName + @")(?:\.\w+)*\\((?i:[" + driveNames + @"]))\$";
      string replacement = "$1:";
      string[] uncPaths = { @"\\MyMachine.domain1.mycompany.com\C$\ThingsToDo.txt",
                            @"\\MyMachine\c$\ThingsToDo.txt",
                            @"\\MyMachine\d$\documents\mydocument.docx" };

      foreach (string uncPath in uncPaths)
      {
         Console.WriteLine("Input string: " + uncPath);
         Console.WriteLine("Returned string: " + Regex.Replace(uncPath, pattern, replacement));
         Console.WriteLine();
      }
   }
}
// The example displays the following output if run on a machine whose name is
// MyMachine:
//    Input string: \\MyMachine.domain1.mycompany.com\C$\ThingsToDo.txt
//    Returned string: C:\ThingsToDo.txt
//
//    Input string: \\MyMachine\c$\ThingsToDo.txt
//    Returned string: c:\ThingsToDo.txt
//
//    Input string: \\MyMachine\d$\documents\mydocument.docx
//    Returned string: d:\documents\mydocument.docx
The regular expression pattern is defined by the following expression:
"\\\\(?i:" + Environment.MachineName + ")(?:\.\w+)*\\((?i:[" + driveNames + "]))\$"
The following table shows how the regular expression pattern is interpreted.
The replacement pattern $1 replaces the entire match with the first captured subexpression. That is, it replaces the UNC machine and drive name with the drive.
|
http://msdn.microsoft.com/en-US/library/e7f5w83z.aspx
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
This article discusses my journey toward having a web page run inside of a WinForm client and fire events back to it. The sample app which is included provides the implementation of this article. While the sample application could have been done in an easier fashion (all ASP.NET for example) the intent is to provide a working model that can implement more difficult designs.
The application consists of a WinForm application which is broken into two sections:
The application implements the IDocHostUIHandler interface which will allow MSHTML to do callbacks into my code. It then displays the webpage inside the IE container. When a button is clicked, a javascript component is called with a text description of what was clicked on. The javascript passes this data to the external method which will populate the form with data. If data is not found, the purchase button is disabled.
To make the application work on your machine, alter the application config file (SampleEventProgram.exe.config) in the BIN\Release directory and change the path to where the sampleweb application is located on your machine.
Not being a raw, C++ COM developer - or even knowing much about the internals of the IE engine - I began my deployment by researching on the Internet. The first day I searched the web for anything on putting IE inside a form. I bring this up because it led me to a webpage which appeared to be a BLOG. I read it out of interest to see why the heck it showed up in my search. In it, the author said:
"In my growing years of development, I have had several unanswered questions arise. [...] Why is it so hard to implement a web browser inside a windows form and take control of it?"
I probably should have taken this as a warning, but I plunged forward in my quest. After all, MS had years to improve this process... right? Over here at CodeProject, I came upon a discussion thread that died with no conclusive help, as well as articles by Wiley Technology Publishing and Nikhil Dabas. The first article was well written, but the most important parts of the piece (implementing IDocHostUIHandler and ICustomDoc) were taken offline and done in Delphi! Nikhil's article, however, had a fine discussion on implementing the interface as well as a deployed DLL for the interfaces in his sample application!
However, his deployment for hooking events required that you know the specific controls on the webform and then sink their click events. It also did not allow the web app to send any information back to the winform client. This is great for having the click events dive directly into code. But I needed the HTML object to tell me some information about what was clicked. So while I finally got IDocHostUIHandler implemented, I still did not get my last piece done and working. I was stuck for weeks in a continuous result of 'object null or not an object'.
I had a few hints such as looking into GetExternal and I could SWEAR that a post suggested using window.getExternal in my javascript. Obviously that didn't get me very far since I have since learned that is not a valid javascript call. I also got some suggestions on implementing IDispatch. But nothing really seemed to take the final step of scripting my program.
A lengthy two-day discussion with CP member .S.Rod. finally led to a better understanding and a great assistance in getting everything tied together and working. The most interesting thing with all of this research is that I talked to maybe four different people and got four different implementation approaches. I am sure that in each of those, the person in the discussion had an approach that eventually worked for them. Unfortunately, it was not until my final discussion that I had one that got me past the null object problem.
The only other drawback to all of this research was that I found I was occasionally killing myself by taking input from several people, combining it all together, and having conflicts with what was already done. To make matters worse, I was given a new computer in the middle of all of this and spent two days getting everything back to normal! It was just when I was ready to walk away from this project for awhile that .S.Rod. was kind enough to pull everything together for me. Here are the final results, and a sample application to help guide others in their quest to control IE.
For this application I am going to have a webpage present buttons and graphics for a product catalog. Clicking on a button in the webpage will populate the form with descriptions and activate the purchase button. Clicking on the purchase button in the winform will send Lefty Larduchi to your front door for some money. My first step was just to build the webpage (just plain HTML and javascript) and get it to a point of displaying stuff.
Creating the form is not a problem. Just start a new C# Windows Form project, customize the toolbar and add Internet Explorer from the COM side of the fence. The form consists of a panel docked to the left, a slider, IE docked to the right, and two textboxes and a button that are inside the panel.
Now one of the first steps in taking control of IE is to implement IDocHostUIHandler. Nikhil wrote a fine article on doing this, so I won't duplicate his efforts. You can cover that first step here.
Make sure you keep track of the MSHtmHstInterop.dll piece of his sample application. I used the sample app to copy and paste the base IDocHostUIHandler implementations into my form.
So after implementing IDocHostUIHandler, what else needs to be done? Well, in Nikhil's article his example would require that you know the controls that will be clicked and that someone click on that control. This is the code that accomplishes that:
private void WebBrowser_DocumentComplete(object sender,
    AxSHDocVw.DWebBrowserEvents2_DocumentCompleteEvent e)
{
    // Once the page has loaded, grab the MSHTML document, find the button
    // by id and sink its click event.
    IHTMLDocument2 doc = (IHTMLDocument2)this.WebBrowser.Document;
    HTMLButtonElement button = (HTMLButtonElement)doc.all.item("theButton", null);
    ((HTMLButtonElementEvents2_Event)button).onclick +=
        new HTMLButtonElementEvents2_onclickEventHandler(this.Button_onclick);
}
I had to face an application requirement where we were showing major sections, with each section being just DHTML; each section had to provide information about itself and then have the WinForm act upon that information. I found it interesting that, in all the numerous articles I read on this subject, Outlook is cited as deploying this WinForm/IE merge - just not in .NET!
In this example, we are using the javascript object window.external to interact with the form. When a user clicks on a section, it fires a method in the script area. That method, via window.external, causes MSHTML to call back into IDocHostUIHandler.GetExternal, and the returned object is then invoked through IDispatch. The mechanics, as .S.Rod. explained them to me, are roughly this: the form registers its handler with the browser through ICustomDoc.SetUIHandler(object); when script evaluates window.external.mymethod(), MSHTML calls IDocHostUIHandler.GetExternal(out object ppDispatch) to obtain an IUnknown for the external object, queries it for IDispatch, and then uses GetIDsOfNames() and Invoke() to locate and call the method. That is why the object handed back from GetExternal must expose an interface marked [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)].
In the end, we have a sample html file which reacts to clicks by calling javascript's window.external.MyMethod(). In order for this to work, the aforementioned object must be declared and must implement the MyMethod() method. In the sample application, that method is public void PopulateWindow(string selectedPageItem).
It is important to note at this point that any method which will interact at the COM level should be defined to return void. If there is a need to return data, that is done via parameters, with the return parameters marked as out. If there is a need to return an error, that is done by raising an HRESULT via System.Runtime.InteropServices: in C# you do a throw new COMException("", returnValue), where returnValue is an int defined somewhere in your class and set to the HRESULT value you want to raise.
In the sample application, the first step to exposing an object via IDispatch is to create the custom interface:
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
interface IPopulateWindow
{
    void PopulateWindow(string selectedPageItem);
}
Then we implement the interface in a class definition:
public class PopulateClass : IPopulateWindow
{
    SampleEventProgram.Form1 myOwner;

    /// <summary>
    /// Requires a handle to the owning form
    /// </summary>
    public PopulateClass(SampleEventProgram.Form1 ownerForm)
    {
        myOwner = ownerForm;
    }

    /// <summary>
    /// Looks up the string passed and populates the form
    /// </summary>
    public void PopulateWindow(string itemSelected)
    {
        // insert logic here
    }
}
So what we have done here is create an interface that exposes IDispatch, implement that interface in the PopulateClass class definition, and take a reference to our form in the constructor. This gives the class access to the specific fields we choose. I need the class to be able to change the two textboxes as well as enable the button, so I have to go into the form code and change those three member definitions from private to public. With that, the web page, the PopulateClass instance, and the form are all connected.
Finally I have to implement the last piece of code that will connect my webform to my class I defined above. In the implementation for IDocHostUIHandler.GetExternal I need to set the object passed to an instance of my class. In implementing IDocHostUIHandler, you should have taken the implementation from Nikhil's sample app and cut/paste it into your program. Alter the necessary implementation as follows:
void IDocHostUIHandler.GetExternal(out object ppDispatch)
{
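// Hand MSHTML the object whose methods script can reach through window.external.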
ppDispatch = new PopulateClass(this);
}
This now ties your class to the window.external portion of mshtml, it ties the form to the new class definition, and it readies everything for processing. The class implementation basically acts as a go-between between the two worlds of System.Windows.Forms.Form and Microsoft.MSHTML and your web form. The final step - before I write my code in the PopulateWindow method - is to pick which fields I want my class to access and change their definition from private to public, or to follow better coding standards - add public accessors to those fields. In this sample, I exposed the various elements that were to be changed with public accessors.
Now that I have a working application as well as a working sample application, I have to wonder why it took so long to pull all of this information together. But now, here it is. In the sample application, the form sets up the browser control in InitializeComponents and navigates it to the sample page whose path is set in the application config file. When an HTML button is clicked, it calls the javascript method CallHostUI, passing it the name of the item clicked; CallHostUI in turn calls window.external.PopulateWindow() with that name, which lands in the PopulateClass instance and fills in the form.
With all of this working I should add a note of warning. I have found that the Visual Designer code does not expect you to have an interface and class definition in front of your form definition. The result is that if you add or modify a control, it visually appears to take, but no change has actually occurred in your code, and the change disappears once you close and reopen the project. More frustrating is when you add an event handler: you get the binding to the delegate, but no actual base method implementation. Fortunately, all you need to do to work around this is to move your interface and class down to the bottom of your source code.
This can provide a very rich form of client presentation as well as rich WinForm elements for processing data. In my particular example, I'm exposing webpages developed for our internal web UI presentation engine. When each section inside of a web page is moused over, the section is highlighted with a bright yellow border. Clicking on the section passes that information to my WinForm which expresses that section in a properties page display. The various built-in editors in the framework as well as custom editors we write will hook into that properties page to allow for simple modification of data. For example, changing color in a cell element pops up the color picker editor and changing a font pops up the font picker.
|
http://www.codeproject.com/Articles/4163/Hosting-a-webpage-inside-a-Windows-Form?fid=15529&df=90&mpp=10&sort=Position&spc=None&tid=3726144
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Shouldn't the getter be named "getPName"?
Dave
(pardon brevity, typos, and top-quoting; on cell)
On Nov 2, 2012 1:09 AM, "Maliwei" <mlw5415@gmail.com> wrote:
> As I have desc in the mail title, and see the code below:
>
> /**--------------code start----------**/
> import ognl.Ognl;
> import ognl.OgnlException;
> class Hello {
> private String pName;
> public String getpName() {
> return pName;
> }
> public void setpName(String pName) {
> this.pName = pName;
> }
> }
>
> public class OgnlTest {
> public static void main(String[] args) {
> Hello action = new Hello();
> action.setpName("pName.Foo");
> try {
> Object pName = Ognl.getValue("pName", action);
> System.out.println(pName);
> } catch (OgnlException e) {
> //this will happen when use version 2.7+ and 3.x
> e.printStackTrace();
> }
> }
> }
> /**--------------code end----------**/
>
> According to JavaBeans Spec sec 8.8 "Capitalization of inferred names":
> Thus when we extract a property or event name from the middle of an
> existing Java name, we normally convert the first character to lower case.
> However to support the occasional use of all upper-case names, we check if
> the first two characters of the name are both upper case and if so leave it
> alone. So for example,
> “FooBah” becomes “fooBah”
> “Z” becomes “z”
> “URL” becomes “URL”
> We provide a method Introspector.decapitalize which implements this
> conversion rule.
> String java.beans.Introspector.decapitalize(String name)
>.
> Thus "FooBah" becomes "fooBah" and "X" becomes "x", but "URL" stays as
> "URL".
>
>
>
>
>
> Best Regards
> Ma Liwei
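To make the quoted inference rule concrete, here is a small self-contained demo (the nested Hello class mirrors the bean from the mail; everything else is just for illustration). Whether a particular OGNL release then resolves "pName" against getpName() depends on how that release builds accessor names, which is exactly the discrepancy reported above:

import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class DecapitalizeDemo {
    // Mirrors the Hello bean from the mail: field pName, accessors getpName()/setpName().
    public static class Hello {
        private String pName;
        public String getpName() { return pName; }
        public void setpName(String pName) { this.pName = pName; }
    }

    public static void main(String[] args) throws Exception {
        // The spec's rule: only names whose first two characters are upper case keep their case.
        System.out.println(Introspector.decapitalize("FooBah")); // fooBah
        System.out.println(Introspector.decapitalize("URL"));    // URL
        System.out.println(Introspector.decapitalize("pName"));  // pName

        // The Introspector derives the property "pName" from getpName()/setpName(),
        // but the canonical accessors for a property named "pName" are
        // getPName()/setPName(), which is what an expression language may look for.
        BeanInfo info = Introspector.getBeanInfo(Hello.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " -> " + pd.getReadMethod());
        }
    }
}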
|
http://mail-archives.apache.org/mod_mbox/commons-user/201211.mbox/%3CCADJJoV7uf6GK6AgtahxMzEkyhxPhF7kfMjaVp1oZdeDM3P538A@mail.gmail.com%3E
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
OpenJobObject function
Opens an existing job object.
Syntax
Parameters
- dwDesiredAccess [in]
The access to the job object. This parameter can be one or more of the job object access rights. This access right is checked against any security descriptor for the object.
- bInheritHandles [in]
If this value is TRUE, processes created by this process will inherit the handle. Otherwise, the processes do not inherit this handle.
- lpName [in]
The name of the job to be opened. Name comparisons are case sensitive.
This function can open objects in a private namespace. For more information, see Object Namespaces.
Terminal Services: The name can have a "Global\" or "Local\" prefix to explicitly open the object in the global or session namespace. The remainder of the name can contain any character except the backslash character (\). For more information, see Kernel Object Namespaces.
Return value
If the function succeeds, the return value is a handle to the job. The handle provides the requested access to the job.
If the function fails, the return value is NULL. To get extended error information, call GetLastError.
Remarks
To associate a process with a job, use the AssignProcessToJobObject function.
To compile an application that uses this function, define _WIN32_WINNT as 0x0500 or later. For more information, see Using the Windows Headers.
Requirements
See also
|
http://msdn.microsoft.com/en-us/library/ms684312.aspx
|
CC-MAIN-2014-52
|
en
|
refinedweb
|