Q: Eclipse Style Function Completions in Emacs for C, C++ and JAVA? How Do I Get Eclipse Style Function Completions in Emacs for C, C++ and JAVA? I love the power of the Emacs text editor but the lack of an "intellisense" feature leaves me using Eclipse. A: I can only answer your question as one who has not used Eclipse much. But! What if there was a really nice fast heuristic analysis of everything you typed or looked at in your emacs buffers, and you got smart completion over all that everywhere, not just in code? M-x load-library completion M-x global-set-key C-RET complete RET A: When I was doing java development I used to use the: Java Development Environment for Emacs (JDEE) The JDEE will provide method name completion when you explicitly invoke a jdee provided function. It has a keyboard binding for this functionality in the jdee-mode. A: The CEDET package provides completion for C/C++ & Java (and for some other languages). For initial customization you can take my config that I use to work with C++ projects A: Right now, I'm using Auto Complete for Emacs. As a current Visual Studio and ex-Eclipse user, I can say that it rivals both applications quite well. It's still not as good as Microsoft's IntelliSense for C#, but some would say that C++ is notoriously difficult to parse. It leverages the power of (I believe) the Semantic package from Cedet, and I find it feels nicer to use when compared to Smart Complete. It completes C++ members, local variables, etc. It's pretty good. However, it falls down on not being able to complete overloaded methods (it only shows the function once with no parameters, but that's a limitation of Cedet I believe), and other various things. It may improve in the future though! By the way, I could be wrong here, but I think you need an EDE project set up for the class member completion to work (just like you would normally with Semantic). I've only ever used it while having an EDE project, so I assume this is true. A: Searching the web I find http://www.emacswiki.org/cgi-bin/wiki/EmacsTags#toc7 describing complete-tab in etags. It is bound to M-Tab by default. This binding may be a problem for you. Also, etags has some limits, which may annoy you... The link also points to CEDET as having better symbol completion support. A: M-/ is a quick and dirty autocomplete based on the contents of your current buffer. It won't give you everything you get in Eclipse but is surprisingly powerful. A: Have you tried the emacs plugin for eclipse? http://people.csail.mit.edu/adonovan/hacks/eclipse-emacs.html A: I've written a C++-specific package on top of CEDET that might provide what you want. It provides an Eclipse-like function arguments hint. Overloaded functions are supported both for function arguments hint and for completion. Package is located here: https://github.com/abo-abo/function-args Make sure to check out the nice screenshot: https://raw.github.com/abo-abo/function-args/master/doc/screenshot-1.png A: auto-complete-clang is what you want. Can't go wrong with using an actual C++ compiler for completions. The only problem it has is there's no way to know what -I and -D flags to pass to the compiler. There are packages for emacs that let you declare projects and then you can use that. Personally, I use CMake for all C and C++ work so I wrote some CMake code to pass that information to emacs through directory-local variables. It works, but I'm thinking of writing a package that calls cmake from emacs so there's less intrusion.
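For reference, a rough init-file equivalent of the interactive commands in the first answer might look like the following (a sketch only; the exact key syntax can vary between Emacs versions):

;; Load the built-in dynamic completion package and bind C-RET to the `complete' command.
(load-library "completion")
(global-set-key (kbd "<C-return>") 'complete)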
{ "language": "en", "url": "https://stackoverflow.com/questions/129257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How can I make a framework for quickly building similar, but different, sites? I have the need to build lots of sites that are very similar, but not exactly the same, using ASP.NET 2.0. I'm trying to find the best way to make generating these sites quick and easy. The sites will be used to collect information about a user, and there will be multiple steps for each site. The sites will all collect similar information, but some sites may require less or more information than others. Additionally, certain form fields will need to be populated from different database tables, based on the site. I would like to use a Microsoft patterns & practices solution, but I'm not sure that there is one that fits this scenario. My current thinking is that I will put as much business logic as possible into an external assembly and then write a custom Web user control for each step for each site. I will include these user controls in a master page's Panel control. I don't like this solution because each site will be nearly duplicating the code for the other sites. How can I improve upon this design? The main obstacle is that the sites are similar, but sufficiently different. Thanks! Alex A: You can create base classes which handle all of the common functionality and then have your site-specific controls inherit from their respective base classes and provide their specific implementations. A: We face this problem all the time. What we do is to have a common library that all our sites use, and to bury shared functionality in classes or utility modules in this library. Each site can then use those objects or utility functions as is, or extend the common classes. Keep in mind that these shared classes can include all kinds of things, including code-behind for pages and user controls that you can inherit from and extend. Deciding what goes in the app and what goes in the common library is one of the hardest things about our business, though. Put it in the common library and you lose flexibility; put it in the app and you risk having duplicate code to maintain. If you have a fairly complex database setup, it might be worth your time to come up with a framework for specifying your db schema in XML and having your app enforce that schema and build any additional SQL infrastructure that you need based on that definition (e.g. utility views, stored procedures, etc). We did this and it resulted in a huge productivity boost. A: Have you looked into MonoRail (www.castleproject.org)? It is an implementation of the MVC pattern, similar to Ruby on Rails, with a few nice view engines; I prefer NVelocity. From the Castle Project you can also use an implementation of ActiveRecord that makes life real nice. If you are on that trail, also have a look at the Colin Ramsay screencasts. To be honest, all the MS solutions are real fat. Another great thing about the Castle Project is that it is totally open source, so you can learn loads from their code. A: How about using an Application Framework like DotNetNuke or mojoPortal? They both provide flexibility and enable you to develop websites very quickly with common functionality, leaving you to develop custom modules where the functionality you require may be different. There are also thousands of other modules that can be bought which provide excellent functionality. However, we chose to use WCSF and enhanced it. All the above-mentioned projects are open source and are good examples of code to learn from. I know it may be a late answer but I hope it helps
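As a sketch of the base-class approach suggested in the first answer (all class and member names here are invented for illustration, not taken from any real project):

// Shared plumbing lives in a base control; each site only overrides what differs.
public abstract class RegistrationStepBase : System.Web.UI.UserControl
{
    public abstract string StepName { get; }

    public virtual void LoadStep(int userId)
    {
        // common logic: load previously entered values, wire up validators, etc.
    }

    public virtual void SaveStep(int userId)
    {
        // common logic: persist the collected values.
    }
}

public class ContactDetailsStep : RegistrationStepBase
{
    public override string StepName
    {
        get { return "Contact details"; }
    }

    public override void SaveStep(int userId)
    {
        base.SaveStep(userId);
        // site-specific fields are handled here.
    }
}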
{ "language": "en", "url": "https://stackoverflow.com/questions/129261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: CASCADE DELETE just once I have a Postgresql database on which I want to do a few cascading deletes. However, the tables aren't set up with the ON DELETE CASCADE rule. Is there any way I can perform a delete and tell Postgresql to cascade it just this once? Something equivalent to DELETE FROM some_table CASCADE; The answers to this older question make it seem like no such solution exists, but I figured I'd ask this question explicitly just to be sure. A: I cannot comment on Palehorse's answer so I added my own answer. Palehorse's logic is ok but efficiency can be bad with big data sets. DELETE FROM some_child_table sct WHERE exists (SELECT 1 FROM some_table st WHERE sct.some_fk_field=st.some_id); DELETE FROM some_table; It is faster if you have indexes on the columns and the data set is bigger than a few records. A: If you really want DELETE FROM some_table CASCADE; which means "remove all rows from table some_table", you can use TRUNCATE instead of DELETE, and CASCADE is always supported. Postgres supports CASCADE with the TRUNCATE command: TRUNCATE some_table CASCADE; USE WITH CARE - this will delete all data from all tables that have a foreign key to the specified table, plus everything that foreign keys to those tables, and so on; in other words, it will drop all rows of all tables which have a foreign key constraint on some_table and all tables that have constraints on those tables, etc. Proceed with extreme caution. However, if you want to use a selective delete with a where clause, TRUNCATE is not good enough. Handily this is transactional (i.e. can be rolled back), although it is not fully isolated from other concurrent transactions, and has several other caveats. Read the docs for details. A: To automate this, you could define the foreign key constraint with ON DELETE CASCADE. I quote the manual on foreign key constraints: CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well.
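If you do decide to make the cascading behaviour permanent rather than one-off, redefining the constraint might look like this (table, column, and constraint names are placeholders):

-- drop the existing restricting constraint, then recreate it with ON DELETE CASCADE
ALTER TABLE some_child_table
    DROP CONSTRAINT some_child_table_some_fk_field_fkey;
ALTER TABLE some_child_table
    ADD CONSTRAINT some_child_table_some_fk_field_fkey
        FOREIGN KEY (some_fk_field) REFERENCES some_table (some_id)
        ON DELETE CASCADE;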
A: I took Joe Love's answer and rewrote it using the IN operator with sub-selects instead of = to make the function faster (according to Hubbitus's suggestion): create or replace function delete_cascade(p_schema varchar, p_table varchar, p_keys varchar, p_subquery varchar default null, p_foreign_keys varchar[] default array[]::varchar[]) returns integer as $$ declare rx record; rd record; v_sql varchar; v_subquery varchar; v_primary_key varchar; v_foreign_key varchar; v_rows integer; recnum integer; begin recnum := 0; select ccu.column_name into v_primary_key from information_schema.table_constraints tc join information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name and ccu.constraint_schema=tc.constraint_schema and tc.constraint_type='PRIMARY KEY' and tc.table_name=p_table and tc.table_schema=p_schema; for rx in ( select kcu.table_name as foreign_table_name, kcu.column_name as foreign_column_name, kcu.table_schema foreign_table_schema, kcu2.column_name as foreign_table_primary_key from information_schema.constraint_column_usage ccu join information_schema.table_constraints tc on tc.constraint_name=ccu.constraint_name and tc.constraint_catalog=ccu.constraint_catalog and ccu.constraint_schema=ccu.constraint_schema join information_schema.key_column_usage kcu on kcu.constraint_name=ccu.constraint_name and kcu.constraint_catalog=ccu.constraint_catalog and kcu.constraint_schema=ccu.constraint_schema join information_schema.table_constraints tc2 on tc2.table_name=kcu.table_name and tc2.table_schema=kcu.table_schema join information_schema.key_column_usage kcu2 on kcu2.constraint_name=tc2.constraint_name and kcu2.constraint_catalog=tc2.constraint_catalog and kcu2.constraint_schema=tc2.constraint_schema where ccu.table_name=p_table and ccu.table_schema=p_schema and TC.CONSTRAINT_TYPE='FOREIGN KEY' and tc2.constraint_type='PRIMARY KEY' ) loop v_foreign_key := rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name; v_subquery := 'select "'||rx.foreign_table_primary_key||'" as key from '||rx.foreign_table_schema||'."'||rx.foreign_table_name||'" where "'||rx.foreign_column_name||'"in('||coalesce(p_keys, p_subquery)||') for update'; if p_foreign_keys @> ARRAY[v_foreign_key] then --raise notice 'circular recursion detected'; else p_foreign_keys := array_append(p_foreign_keys, v_foreign_key); recnum:= recnum + delete_cascade(rx.foreign_table_schema, rx.foreign_table_name, null, v_subquery, p_foreign_keys); p_foreign_keys := array_remove(p_foreign_keys, v_foreign_key); end if; end loop; begin if (coalesce(p_keys, p_subquery) <> '') then v_sql := 'delete from '||p_schema||'."'||p_table||'" where "'||v_primary_key||'"in('||coalesce(p_keys, p_subquery)||')'; --raise notice '%',v_sql; execute v_sql; get diagnostics v_rows = row_count; recnum := recnum + v_rows; end if; exception when others then recnum=0; end; return recnum; end; $$ language PLPGSQL; A: I wrote a (recursive) function to delete any row based on its primary key. I wrote this because I did not want to create my constraints as "on delete cascade". I wanted to be able to delete complex sets of data (as a DBA) but not allow my programmers to be able to cascade delete without thinking through all of the repercussions. I'm still testing out this function, so there may be bugs in it -- but please don't try it if your DB has multi column primary (and thus foreign) keys. 
Also, the keys all have to be able to be represented in string form, but it could be written in a way that doesn't have that restriction. I use this function VERY SPARINGLY anyway, I value my data too much to enable the cascading constraints on everything. Basically this function is passed in the schema, table name, and primary value (in string form), and it will start by finding any foreign keys on that table and makes sure data doesn't exist-- if it does, it recursively calls itsself on the found data. It uses an array of data already marked for deletion to prevent infinite loops. Please test it out and let me know how it works for you. Note: It's a little slow. I call it like so: select delete_cascade('public','my_table','1'); create or replace function delete_cascade(p_schema varchar, p_table varchar, p_key varchar, p_recursion varchar[] default null) returns integer as $$ declare rx record; rd record; v_sql varchar; v_recursion_key varchar; recnum integer; v_primary_key varchar; v_rows integer; begin recnum := 0; select ccu.column_name into v_primary_key from information_schema.table_constraints tc join information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name and ccu.constraint_schema=tc.constraint_schema and tc.constraint_type='PRIMARY KEY' and tc.table_name=p_table and tc.table_schema=p_schema; for rx in ( select kcu.table_name as foreign_table_name, kcu.column_name as foreign_column_name, kcu.table_schema foreign_table_schema, kcu2.column_name as foreign_table_primary_key from information_schema.constraint_column_usage ccu join information_schema.table_constraints tc on tc.constraint_name=ccu.constraint_name and tc.constraint_catalog=ccu.constraint_catalog and ccu.constraint_schema=ccu.constraint_schema join information_schema.key_column_usage kcu on kcu.constraint_name=ccu.constraint_name and kcu.constraint_catalog=ccu.constraint_catalog and kcu.constraint_schema=ccu.constraint_schema join information_schema.table_constraints tc2 on tc2.table_name=kcu.table_name and tc2.table_schema=kcu.table_schema join information_schema.key_column_usage kcu2 on kcu2.constraint_name=tc2.constraint_name and kcu2.constraint_catalog=tc2.constraint_catalog and kcu2.constraint_schema=tc2.constraint_schema where ccu.table_name=p_table and ccu.table_schema=p_schema and TC.CONSTRAINT_TYPE='FOREIGN KEY' and tc2.constraint_type='PRIMARY KEY' ) loop v_sql := 'select '||rx.foreign_table_primary_key||' as key from '||rx.foreign_table_schema||'.'||rx.foreign_table_name||' where '||rx.foreign_column_name||'='||quote_literal(p_key)||' for update'; --raise notice '%',v_sql; --found a foreign key, now find the primary keys for any data that exists in any of those tables. for rd in execute v_sql loop v_recursion_key=rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name||'='||rd.key; if (v_recursion_key = any (p_recursion)) then --raise notice 'Avoiding infinite loop'; else --raise notice 'Recursing to %,%',rx.foreign_table_name, rd.key; recnum:= recnum +delete_cascade(rx.foreign_table_schema::varchar, rx.foreign_table_name::varchar, rd.key::varchar, p_recursion||v_recursion_key); end if; end loop; end loop; begin --actually delete original record. 
v_sql := 'delete from '||p_schema||'.'||p_table||' where '||v_primary_key||'='||quote_literal(p_key); execute v_sql; get diagnostics v_rows= row_count; --raise notice 'Deleting %.% %=%',p_schema,p_table,v_primary_key,p_key; recnum:= recnum +v_rows; exception when others then recnum=0; end; return recnum; end; $$ language PLPGSQL; A: No. To do it just once you would simply write the delete statement for the table you want to cascade. DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_Table); DELETE FROM some_table; A: If I understand correctly, you should be able to do what you want by dropping the foreign key constraint, adding a new one (which will cascade), doing your stuff, and recreating the restricting foreign key constraint. For example: testing=# create table a (id integer primary key); NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "a_pkey" for table "a" CREATE TABLE testing=# create table b (id integer references a); CREATE TABLE -- put some data in the table testing=# insert into a values(1); INSERT 0 1 testing=# insert into a values(2); INSERT 0 1 testing=# insert into b values(2); INSERT 0 1 testing=# insert into b values(1); INSERT 0 1 -- restricting works testing=# delete from a where id=1; ERROR: update or delete on table "a" violates foreign key constraint "b_id_fkey" on table "b" DETAIL: Key (id)=(1) is still referenced from table "b". -- find the name of the constraint testing=# \d b; Table "public.b" Column | Type | Modifiers --------+---------+----------- id | integer | Foreign-key constraints: "b_id_fkey" FOREIGN KEY (id) REFERENCES a(id) -- drop the constraint testing=# alter table b drop constraint b_a_id_fkey; ALTER TABLE -- create a cascading one testing=# alter table b add FOREIGN KEY (id) references a(id) on delete cascade; ALTER TABLE testing=# delete from a where id=1; DELETE 1 testing=# select * from a; id ---- 2 (1 row) testing=# select * from b; id ---- 2 (1 row) -- it works, do your stuff. -- [stuff] -- recreate the previous state testing=# \d b; Table "public.b" Column | Type | Modifiers --------+---------+----------- id | integer | Foreign-key constraints: "b_id_fkey" FOREIGN KEY (id) REFERENCES a(id) ON DELETE CASCADE testing=# alter table b drop constraint b_id_fkey; ALTER TABLE testing=# alter table b add FOREIGN KEY (id) references a(id) on delete restrict; ALTER TABLE Of course, you should abstract stuff like that into a procedure, for the sake of your mental health. A: Yeah, as others have said, there's no convenient 'DELETE FROM my_table ... CASCADE' (or equivalent). To delete non-cascading foreign key-protected child records and their referenced ancestors, your options include: * *Perform all the deletions explicitly, one query at a time, starting with child tables (though this won't fly if you've got circular references); or *Perform all the deletions explicitly in a single (potentially massive) query; or *Assuming your non-cascading foreign key constraints were created as 'ON DELETE NO ACTION DEFERRABLE', perform all the deletions explicitly in a single transaction; or *Temporarily drop the 'no action' and 'restrict' foreign key constraints in the graph, recreate them as CASCADE, delete the offending ancestors, drop the foreign key constraints again, and finally recreate them as they were originally (thus temporarily weakening the integrity of your data); or *Something probably equally fun. 
It's on purpose that circumventing foreign key constraints isn't made convenient, I assume; but I do understand why in particular circumstances you'd want to do it. If it's something you'll be doing with some frequency, and if you're willing to flout the wisdom of DBAs everywhere, you may want to automate it with a procedure. I came here a few months ago looking for an answer to the "CASCADE DELETE just once" question (originally asked over a decade ago!). I got some mileage out of Joe Love's clever solution (and Thomas C. G. de Vilhena's variant), but in the end my use case had particular requirements (handling of intra-table circular references, for one) that forced me to take a different approach. That approach ultimately became recursively_delete (PG 10.10). I've been using recursively_delete in production for a while, now, and finally feel (warily) confident enough to make it available to others who might wind up here looking for ideas. As with Joe Love's solution, it allows you to delete entire graphs of data as if all foreign key constraints in your database were momentarily set to CASCADE, but offers a couple additional features: * *Provides an ASCII preview of the deletion target and its graph of dependents. *Performs deletion in a single query using recursive CTEs. *Handles circular dependencies, intra- and inter-table. *Handles composite keys. *Skips 'set default' and 'set null' constraints. A: The delete with the cascade option only applies to tables with foreign keys defined. If you do a delete, and it says you cannot because it would violate the foreign key constraint, the cascade will cause it to delete the offending rows. If you want to delete associated rows in this way, you will need to define the foreign keys first. Also, remember that unless you explicitly instruct it to begin a transaction, or you change the defaults, it will do an auto-commit, which could be very time consuming to clean up. A: When you create a new table, you can add constraints like UNIQUE or NOT NULL, and you can also tell SQL which action it should take when you try to DELETE rows that are REFERENCED by other tables: CREATE TABLE company ( id SERIAL PRIMARY KEY, name VARCHAR(128), year DATE); CREATE TABLE employee ( id SERIAL PRIMARY KEY, first_name VARCHAR(128) NOT NULL, last_name VARCHAR(128) NOT NULL, company_id INT REFERENCES company(id) ON DELETE CASCADE, salary INT, UNIQUE (first_name, last_name)); So after that you can just DELETE any rows which you need, for example: DELETE FROM company WHERE id = 2;
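One of the options listed earlier, deferrable constraints, can be sketched like this (table and column names are placeholders, and it only works if the foreign keys were declared DEFERRABLE):

BEGIN;
SET CONSTRAINTS ALL DEFERRED;                         -- postpone FK checks until COMMIT
DELETE FROM some_table WHERE some_id = 1;             -- parent rows can now go first
DELETE FROM some_child_table WHERE some_fk_field = 1;
COMMIT;                                               -- constraints are verified here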
{ "language": "en", "url": "https://stackoverflow.com/questions/129265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "295" }
Q: Why no static methods in Interfaces, but static fields and inner classes OK? [pre-Java8] There have been a few questions asked here about why you can't define static methods within interfaces, but none of them address a basic inconsistency: why can you define static fields and static inner types within an interface, but not static methods? Static inner types perhaps aren't a fair comparison, since that's just syntactic sugar that generates a new class, but why fields but not methods? An argument against static methods within interfaces is that it breaks the virtual table resolution strategy used by the JVM, but shouldn't that apply equally to static fields, i.e. the compiler can just inline it? Consistency is what I desire, and Java should have either supported no statics of any form within an interface, or it should be consistent and allow them. A: The purpose of interfaces is to define a contract without providing an implementation. Therefore, you can't have static methods, because they'd have to have an implementation already in the interface since you can't override static methods. As to fields, only static final fields are allowed, which are, essentially, constants (in 1.5+ you can also have enums in interfaces). The constants are there to help define the interface without magic numbers. BTW, there's no need to explicitly specify static final modifiers for fields in interfaces, because only static final fields are allowed. A: This is an old thread, but it is a very important question for everyone. Since I only noticed it today, I am trying to explain it in a cleaner way: The main purpose of an interface is to provide something that is unimplemented, so if static methods were allowed you could call such a method using InterfaceName.staticMethodName(), but it would be an unimplemented method that contains nothing. So it is useless to allow static methods, and therefore they are not provided at all. Static fields are allowed because fields are not implementable; by implementable I mean you cannot perform any logical operation in a field, you can only perform operations on a field. So you are not changing the behavior of the field, which is why they are allowed. Inner classes are allowed because after compilation a separate class file is created for the inner class, say InterfaceName$InnerClassName.class, so you are basically providing the implementation in a different entity altogether, not in the interface. So implementation in inner classes is provided. I hope this helps. A: An official proposal has been made to allow static methods in interfaces in Java 7. This proposal is being made under Project Coin. My personal opinion is that it's a great idea. There is no technical difficulty in implementation, and it's a very logical, reasonable thing to do. There are several proposals in Project Coin that I hope will never become part of the Java language, but this is one that could clean up a lot of APIs. For example, the Collections class has static methods for manipulating any List implementation; those could be included in the List interface. Update: In the Java Posse Podcast #234, Joe D'arcy mentioned the proposal briefly, saying that it was "complex" and probably would not make it in under Project Coin. Update: While they didn't make it into Project Coin for Java 7, Java 8 does support static functions in interfaces.
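For completeness, a minimal sketch of what the last update refers to, using made-up type names (valid from Java 8 onwards):

interface StringUtils {
    // Allowed since Java 8: a static method with a body, invoked as StringUtils.isBlank(...)
    static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}

public class Java8StaticDemo {
    public static void main(String[] args) {
        System.out.println(StringUtils.isBlank("  "));  // prints: true
        // Note: static interface methods are not inherited by implementing classes,
        // so they must always be qualified with the interface name.
    }
}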
A: I'm going to go with my pet theory with this one, which is that the lack of consistency in this case is a matter of convenience rather than design or necessity, since I've heard no convincing argument that it was either of those two. Static fields are there (a) because they were there in JDK 1.0, and many dodgy decisions were made in JDK 1.0, and (b) static final fields in interfaces are the closest thing java had to constants at the time. Static inner classes in interfaces were allowed because that's pure syntactic sugar - the inner class isn't actually anything to do with the parent class. So static methods aren't allowed simply because there's no compelling reason to do so; consistency isn't sufficiently compelling to change the status quo. Of course, this could be permitted in future JLS versions without breaking anything. A: Prior to Java 5, a common usage for static fields was: interface HtmlConstants { static String OPEN = "<"; static String SLASH_OPEN = "</"; static String CLOSE = ">"; static String SLASH_CLOSE = " />"; static String HTML = "html"; static String BODY = "body"; ... } public class HtmlBuilder implements HtmlConstants { // implements ?!? public String buildHtml() { StringBuffer sb = new StringBuffer(); sb.append(OPEN).append(HTML).append(CLOSE); sb.append(OPEN).append(BODY).append(CLOSE); ... sb.append(SLASH_OPEN).append(BODY).append(CLOSE); sb.append(SLASH_OPEN).append(HTML).append(CLOSE); return sb.toString(); } } This meant HtmlBuilder would not have to qualify each constant, so it could use OPEN instead of HtmlConstants.OPEN Using implements in this way is ultimately confusing. Now with Java 5, we have the import static syntax to achieve the same effect: private final class HtmlConstants { ... private HtmlConstants() { /* empty */ } } import static HtmlConstants.*; public class HtmlBuilder { // no longer uses implements ... } A: Actually sometimes there are reasons someone can benefit from static methods. They can be used as factory methods for the classes that implement the interface. For example that's the reason we have Collection interface and the Collections class in openjdk now. So there are workarounds as always - provide another class with a private constructor which will serve as a "namespace" for the static methods. A: There is no real reason for not having static methods in interfaces except: the Java language designers did not want it like that. From a technical standpoint it would make sense to allow them. After all an abstract class can have them as well. I assume but did not test it, that you can "hand craft" byte code where the interface has a static method and it should imho work with no problems to call the method and/or to use the interface as usually. A: I often wonder why static methods at all? They do have their uses, but package/namespace level methods would probably cover 80 of what static methods are used for. A: There is never a point to declaring a static method in an interface. They cannot be executed by the normal call MyInterface.staticMethod(). (EDIT:Since that last sentence confused some people, calling MyClass.staticMethod() executes precisely the implementation of staticMethod on MyClass, which if MyClass is an interface cannot exist!) If you call them by specifying the implementing class MyImplementor.staticMethod() then you must know the actual class, so it is irrelevant whether the interface contains it or not. 
More importantly, static methods are never overridden, and if you try to do: MyInterface var = new MyImplementingClass(); var.staticMethod(); the rules for static say that the method defined in the declared type of var must be executed. Since this is an interface, this is impossible. You can of course always remove the static keyword from the method. Everything will work fine. You may have to suppress some warnings if it is called from an instance method. To answer some of the comments below, the reason you can't execute "result=MyInterface.staticMethod()" is that it would have to execute the version of the method defined in MyInterface. But there can't be a version defined in MyInterface, because it's an interface. It doesn't have code by definition. A: Two main reasons spring to mind: * *Static methods in Java cannot be overridden by subclasses, and this is a much bigger deal for methods than static fields. In practice, I've never even wanted to override a field in a subclass, but I override methods all the time. So having static methods prevents a class implementing the interface from supplying its own implementation of that method, which largely defeats the purpose of using an interface. *Interfaces aren't supposed to have code; that's what abstract classes are for. The whole point of an interface is to let you talk about possibly-unrelated objects which happen to all have a certain set of methods. Actually providing an implementation of those methods is outside the bounds of what interfaces are intended to be. A: Static methods are tied to a class. In Java, an interface is not technically a class, it is a type, but not a class (hence, the keyword implements, and interfaces do not extend Object). Because interfaces are not classes, they cannot have static methods, because there is no actual class to attach to. You may call InterfaceName.class to get the Class Object corresponding to the interface, but the Class class specifically states that it represents classes and interfaces in a Java application. However, the interface itself is not treated as a class, and hence you cannot attach a static method. A: Only static final fields may be declared in an interface (much like methods, which are public even if you don't include the "public" keyword, static fields are "final" with or without the keyword). These are only values, and will be copied literally wherever they are used at compile time, so you never actually "call" static fields at runtime. Having a static method would not have the same semantics, since it would involve calling an interface without an implementation, which Java does not allow. A: The reason is that all methods defined in an interface are abstract whether or not you explicitly declare that modifier. An abstract static method is not an allowable combination of modifiers since static methods are not able to be overridden. As to why interfaces allow static fields. I have a feeling that should be considered a "feature". The only possibility I can think of would be to group constants that implementations of the interface would be interested in. I agree that consistency would have been a better approach. No static members should be allowed in an interface. A: I believe that static methods can be accessed without creating an object and the interface does not allow creating an object as to restrict the programmers from using the interface methods directly rather than from its implemented class. 
But if you define a static method in an interface, you can access it directly without its implementation. Thus static methods are not allowed in interfaces. I don't think that consistency should be a concern. A: Java 1.8 interface static method is visible to interface methods only, if we remove the methodSta1() method from the InterfaceExample class, we won’t be able to use it for the InterfaceExample object. However like other static methods, we can use interface static methods using class name. For example, a valid statement will be: exp1.methodSta1(); So after looking below example we can say : 1) Java interface static method is part of interface, we can’t use it for implementation class objects. 2) Java interface static methods are good for providing utility methods, for example null check, collection sorting ,log etc. 3) Java interface static method helps us in providing security by not allowing implementation classes (InterfaceExample) to override them. 4) We can’t define interface static method for Object class methods, we will get compiler error as “This static method cannot hide the instance method from Object”. This is because it’s not allowed in java, since Object is the base class for all the classes and we can’t have one class level static method and another instance method with same signature. 5) We can use java interface static methods to remove utility classes such as Collections and move all of it’s static methods to the corresponding interface, that would be easy to find and use. public class InterfaceExample implements exp1 { @Override public void method() { System.out.println("From method()"); } public static void main(String[] args) { new InterfaceExample().method2(); InterfaceExample.methodSta2(); // <--------------------------- would not compile // methodSta1(); // <--------------------------- would not compile exp1.methodSta1(); } static void methodSta2() { // <-- it compile successfully but it can't be overridden in child classes System.out.println("========= InterfaceExample :: from methodSta2() ======"); } } interface exp1 { void method(); //protected void method1(); // <-- error //private void method2(); // <-- error //static void methodSta1(); // <-- error it require body in java 1.8 static void methodSta1() { // <-- it compile successfully but it can't be overridden in child classes System.out.println("========= exp1:: from methodSta1() ======"); } static void methodSta2() { // <-- it compile successfully but it can't be overridden in child classes System.out.println("========= exp1:: from methodSta2() ======"); } default void method2() { System.out.println("--- exp1:: from method2() ---");} //synchronized default void method3() { System.out.println("---");} // <-- Illegal modifier for the interface method method3; only public, abstract, default, static // and strictfp are permitted //final default void method3() { System.out.println("---");} // <-- error }
{ "language": "en", "url": "https://stackoverflow.com/questions/129267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92" }
Q: How do you determine whether or not a given Type (System.Type) inherits from a specific base class (in .Net)? This is likely going to be an easy answer and I'm just missing something, but here goes...If I have a Type, (that is, an actual System.Type...not an instance) how do I tell if it inherits from another specific base type? A: Use the IsSubclassOf method of the System.Type class. A: One thing to clarify between Type.IsSubclassOf() and Type.IsAssignableFrom(): * *IsSubclassOf() will return true only if the given type is strictly derived from the specified type. It will return false if the given type IS the specified type. *IsAssignableFrom() is called on the base (target) type and will return true if the argument type is either that type or derived from it. So if you are using these to compare BaseClass and DerivedClass (which inherits from BaseClass) then: BaseClassInstance.GetType.IsSubclassOf(GetType(BaseClass)) = FALSE BaseClassInstance.GetType.IsAssignableFrom(GetType(BaseClass)) = TRUE DerivedClassInstance.GetType.IsSubclassOf(GetType(BaseClass)) = TRUE GetType(BaseClass).IsAssignableFrom(DerivedClassInstance.GetType) = TRUE A: EDIT: Note that the above solution will fail if the base type you are looking for is an interface. The following solution will work for any type of inheritance, be it class or interface. // Returns true if "type" inherits from "baseType" public static bool Inherits(Type type, Type baseType) { return baseType.IsAssignableFrom(type); } (Semi)Helpful extract from the MSDN article: true if [the argument] and the current Type represent the same type, or if the current Type is in the inheritance hierarchy of [the argument], or if the current Type is an interface that [the argument] implements, or if [the argument] is a generic type parameter and the current Type represents one of the constraints of [the argument]. false if none of these conditions are true, or if [the argument] is a null reference (Nothing in Visual Basic).
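A small self-contained sketch of the difference, with invented class names, including the interface case that IsSubclassOf does not cover:

using System;

interface IAnimal { }
class Animal : IAnimal { }
class Dog : Animal { }

static class TypeChecks
{
    static void Main()
    {
        Console.WriteLine(typeof(Dog).IsSubclassOf(typeof(Animal)));       // True
        Console.WriteLine(typeof(Animal).IsSubclassOf(typeof(Animal)));    // False: same type
        Console.WriteLine(typeof(Animal).IsAssignableFrom(typeof(Dog)));   // True
        Console.WriteLine(typeof(IAnimal).IsAssignableFrom(typeof(Dog)));  // True: interfaces count
        Console.WriteLine(typeof(Dog).IsSubclassOf(typeof(IAnimal)));      // False: IsSubclassOf ignores interfaces
    }
}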
{ "language": "en", "url": "https://stackoverflow.com/questions/129277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: MVC model design / inheritance Forgive the vague title, I wasn't sure how to describe it. If you have a generic model "Archive", how do you show different views/forms based on a user selected 'type'? For example, the user creates a new "Archive", then gets the choice of video, book, audio etc. From there they get different forms based on the archive type. Or would it be better to split them into different models - Video, Book, Audio? Or can models inherit (like Video extends Archive). I guess this is basic OOP / classes, but have no idea how to apply that here. Examples from any MVC framework are welcome! A: Your models Video, Book and Audio can inherit from Archive. And each model will have a controller. http://yourserver/Books/Edit/11 You will have to get your user to pick the type of archive they want before you create the corresponding model. EDIT (in response to comment) In ASP.NET MVC your model will be a class. public class Video : Archive { public int Id {get;set} public string Name {get;set;} ... } You will also have a controller public class VideoController : Controller { public object Edit(int id) { Video myVideo = GetVideo(id); return View("Edit", myVideo); } ... } And you will have a view in the Views directory for example, the page which contains public class Edit : View<Video> { ... } So you can call this if you had a URL which was http://localhost/Video/Edit/11 This was all done from memory, so there may be some mistakes, but the take-home message is that you specify the inheritance at the model. The model is just a class. In your case you want to inherit from Archive. Once you've done that the model is pass around as normal. A: To actually show a different view should be easy in any MVC framework. For example, in Microsoft ASP.NET MVC you would not just return a view from a controller like the following: return View(); but would actually state the name of the view as a parameter: return View("VideoArchive"); which would then show the view from Views/Archive/VideoArchive.aspx A: Seems like you would not want to have the type inherit from Archive. "Always favor encapsulation/containment over inheritance". Why not create a class called Archive and give it a type property. The type can use inheritance to specialize for Audio, Video, etc. It would seem that you would specialize Archive based on some other criteria. "FileSystemArchivce", "XMLArchive", "SQLArchive" and the type would not change. But the agilist in me says that this may not be necesscary at first, and you can always refactor the design later... In terms of a controller, you probably get the biggest bang for the buck by encapsulating the differences of presentation for each type in the view. So only the view changes based on the type. Likely the semantics and rules for each one are the same and you would not need to have seperate controllers for each type. The views will be different for each type as it will have different attributes. A: The Single Responsibility Principle (PDF) states that: THERE SHOULD NEVER BE MORE THAN ONE REASON FOR A CLASS TO CHANGE. Your Archive class violates this principle by handling multiple different types of archives. For example, if you need to update the video archive, you are also modifying the class that handles book and audio archives. The appropriate way to handle this is to create separate classes for each different type of archive. 
These types should implement a common interface (or inherit a common base class) so that they can be treated interchangeably (polymorphically) by code that only cares about Archives, not specific archive types. Once you have that class hierarchy in place, you just need a single controller and view for each model class. For bonus points, the Single Responsibility Principle can even justify using a factory method or abstract factory for creating your model, view and controller objects (rather than new-ing them up inline). After all, creating an object and using that object are different responsibilities, which might need to be changed for different reasons. A: Seems to me that one solid point in favor of MVC is that you may not need to customize the model (or the controller - of which you want only one) if all the user needs is a different view. Multiple models would appear only if the storage (persistence) architecture dictated a need for it. Some feature like data access objects (DAO) would potentially appear as another tier, between the controller and the model,should you require multiple models. Take a look at the Apache Struts project for examples. As stated in Struts for Newbies, "To use Struts well, it's important to have a good grasp of the fundamentals. Start by reviewing the Key Technologies primer, and studying any unfamiliar topics." For another resource, see Web-Tier Application Framework Design (Sun J2EE Blueprints)
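A bare-bones sketch of the common-interface idea from the Single Responsibility answer above (all names are illustrative only):

public interface IArchiveItem
{
    string Title { get; }
    void Validate();            // each archive type enforces its own rules
}

public class VideoItem : IArchiveItem
{
    public string Title { get; set; }
    public int DurationSeconds { get; set; }
    public void Validate() { /* video-specific checks */ }
}

public class BookItem : IArchiveItem
{
    public string Title { get; set; }
    public string Isbn { get; set; }
    public void Validate() { /* book-specific checks */ }
}

// Code that only cares about "archives" treats them interchangeably:
public static class ArchiveProcessor
{
    public static void Save(IArchiveItem item)
    {
        item.Validate();
        // persist the item...
    }
}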
{ "language": "en", "url": "https://stackoverflow.com/questions/129283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can attributes be added dynamically in C#? Is it possible to add attributes at runtime or to change the value of an attribute at runtime? A: This really depends on what exactly you're trying to accomplish. The System.ComponentModel.TypeDescriptor stuff can be used to add attributes to types, properties and object instances, and it has the limitation that you have to use it to retrieve those properties as well. If you're writing the code that consumes those attributes, and you can live within those limitations, then I'd definitely suggest it. As far as I know, the PropertyGrid control and the visual studio design surface are the only things in the BCL that consume the TypeDescriptor stuff. In fact, that's how they do about half the things they really need to do. A: Attributes are static metadata. Assemblies, modules, types, members, parameters, and return values aren't first-class objects in C# (e.g., the System.Type class is merely a reflected representation of a type). You can get an instance of an attribute for a type and change the properties if they're writable but that won't affect the attribute as it is applied to the type. A: No, it's not. Attributes are meta-data and stored in binary-form in the compiled assembly (that's also why you can only use simple types in them). A: I don't believe so. Even if I'm wrong, the best you can hope for is adding them to an entire Type, never an instance of a Type. A: If you need something to be able to be added dynamically, C# attributes aren't the way. Look into storing the data in xml. I recently did a project that I started w/ attributes, but eventually moved to serialization w/ xml. A: Why do you need to? Attributes give extra information for reflection, but if you externally know which properties you want you don't need them. You could store meta data externally relatively easily in a database or resource file. A: You can't. One workaround might be to generate a derived class at runtime and add the attribute, although this is probably a bit of an overkill. A: Well, just to be different, I found an article that references using Reflection.Emit to do so. Here's the link: http://www.codeproject.com/KB/cs/dotnetattributes.aspx . You will also want to look into some of the comments at the bottom of the article, because possible approaches are discussed. A: As mentioned in a comment below by Deczaloth, I think that metadata is fixed at compile time. I achieve it by creating a dynamic object where I override GetType() or use GetCustomType() and writing my own type. Using this then you could... I tried very hard with System.ComponentModel.TypeDescriptor without success. That does not mean it can't work, but I would like to see code for that. On the other hand, I wanted to change some Attribute values. I did 2 functions which work fine for that purpose.
// ************************************************************************ public static void SetObjectPropertyDescription(this Type typeOfObject, string propertyName, string description) { PropertyDescriptor pd = TypeDescriptor.GetProperties(typeOfObject)[propertyName]; var att = pd.Attributes[typeof(DescriptionAttribute)] as DescriptionAttribute; if (att != null) { var fieldDescription = att.GetType().GetField("description", BindingFlags.NonPublic | BindingFlags.Instance); if (fieldDescription != null) { fieldDescription.SetValue(att, description); } } } // ************************************************************************ public static void SetPropertyAttributReadOnly(this Type typeOfObject, string propertyName, bool isReadOnly) { PropertyDescriptor pd = TypeDescriptor.GetProperties(typeOfObject)[propertyName]; var att = pd.Attributes[typeof(ReadOnlyAttribute)] as ReadOnlyAttribute; if (att != null) { var fieldDescription = att.GetType().GetField("isReadOnly", BindingFlags.NonPublic | BindingFlags.Instance); if (fieldDescription != null) { fieldDescription.SetValue(att, isReadOnly); } } } A: When faced with this situation, yet another solution might be questioning you code design and search for a more object-oriented way. For me, struggling with unpleasant reflection work arounds is the last resort. And my first reaction to this situation would be re-designing the code. Think of the following code, which tries to solve the problem that you have to add an attribute to a third-party class you are using. class Employee {} // This one is third-party. And you have code like var specialEmployee = new Employee(); // Here you need an employee with a special behaviour and want to add an attribute to the employee but you cannot. The solution might be extracting a class inheriting from the Employee class and decorating it with your attribute: [SpecialAttribute] class SpecialEmployee : Employee { } When you create an instance of this new class var specialEmployee = new SpecialEmployee(); you can distinguish this specialEmployee object from other employee objects. If appropriate, you may want to make this SpecialEmployee a private nested class.
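As a concrete illustration of the TypeDescriptor route mentioned in the first answer (a sketch only; note that attributes attached this way are visible through TypeDescriptor, not through ordinary reflection):

using System;
using System.ComponentModel;

[AttributeUsage(AttributeTargets.Class)]
public class MyTagAttribute : Attribute
{
    public string Value { get; private set; }
    public MyTagAttribute(string value) { Value = value; }
}

public class Customer { }

public static class AddAttributeDemo
{
    public static void Main()
    {
        // Attach an attribute to the type at runtime.
        TypeDescriptor.AddAttributes(typeof(Customer), new MyTagAttribute("added at runtime"));

        // Visible via TypeDescriptor...
        var tag = (MyTagAttribute)TypeDescriptor.GetAttributes(typeof(Customer))[typeof(MyTagAttribute)];
        Console.WriteLine(tag != null ? tag.Value : "not found");

        // ...but not via plain reflection:
        Console.WriteLine(typeof(Customer).GetCustomAttributes(typeof(MyTagAttribute), false).Length);  // 0
    }
}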
{ "language": "en", "url": "https://stackoverflow.com/questions/129285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "149" }
Q: Automatically resizing X11 display when connecting an external monitor I have a laptop running Ubuntu to which I connect an external monitor when I'm at the office. Usually this requires me to run xrandr --auto in order for the laptop to re-size the display to match the external monitor. It would be nice if this could be done automatically, ideally triggered when the monitor is connected, but it would be enough to actually run xrandr --auto when the laptop wakes up from suspend/hibernate. I created a script /etc/pm/sleep.d/00xrandr.sh containing the line xrandr --auto but this fails since the script does not have access to the X display. Any ideas? A: I guess that the problem is that the script is being run as root, with no access to your xauth data. Depending on your setup, something like this could work: xauth merge /home/your_username/.Xauthority export DISPLAY=:0.0 xrandr --auto You could use something more clever to find out which user you need to extract xauth data from if you need to. A: Have you tried to set the DISPLAY variable in the script correctly and granted access for other users to your DISPLAY with xhost + localhost? Don't know if that helps, but it's worth a try.
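Putting the two answers together, the sleep hook could look roughly like this (a sketch; the username, display number, and the pm-utils argument handling are assumptions that depend on your setup):

#!/bin/sh
# /etc/pm/sleep.d/00xrandr.sh  (must be executable)
case "$1" in
    resume|thaw)
        export DISPLAY=:0.0
        export XAUTHORITY=/home/your_username/.Xauthority
        xrandr --auto
        ;;
esac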
{ "language": "en", "url": "https://stackoverflow.com/questions/129297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: RDLC SubReports Exporting to Excel Are Ignored I have an RDLC report which has a table, calling a subreport N times. This works perfectly in the control viewer and when I export to PDF. Yet when I export to Excel, I get the following error: Subreports within table/matrix cells are ignored. Does anyone know why this occurs only within the Excel export? And is there a workaround? A: See the MSDN forum link below...it looks like this is not supported in 2000/2005, but there also seem to be some kludgey workarounds (nested lists). A Microsoft moderator claims that Reporting Services 2008 will export everything. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1520229&SiteID=1 A: Drop the table control from the RDLC, put your main report data into a matrix control, and put your subreports below the matrix control. Run your report and export again. It will be solved.
{ "language": "en", "url": "https://stackoverflow.com/questions/129301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to write the content of one stream into another stream in .net? I often run into the problem that I have one stream full of data and want to write everything of it into another stream. All code-examples out there use a buffer in form of a byte-array. Is there a more elegant way to this? If not, what's the ideal size of the buffer. Which factors make up this value? A: I'm not sure if you can directly pipe one stream to another in .NET, but here's a method to do it with an intermediate byte buffer. The size of the buffer is arbitrary. The most efficient size will depend mostly on how much data you're transferring. static void CopyStream(Stream input, Stream output){ byte[] buffer = new byte[0x1000]; int read; while ((read = input.Read(buffer, 0, buffer.Length)) > 0) output.Write(buffer, 0, read); } A: In .NET 4.0 we finally got a Stream.CopyTo method! Yay! A: BufferedStream.CopyTo(Stream) A: Read data in FileStream into a generic Stream will probably have some directions to go in A: I'm not aware of a more elegant way, than using a buffer. But the size of a buffer can make a difference. Remember the issues about Vista's File Copy? It's reason was (basically) changing the buffer size. The changes are explained in this blogpost. You can learn the main factors from that post. However, this only applies for file copying. In applications probably you do a lot of memory copies, so in that case, the 4KB could be the best buffer size, as recommended by the .NET documentation. A: Regarding the ideal buffer size: "When using the Read method, it is more efficient to use a buffer that is the same size as the internal buffer of the stream, where the internal buffer is set to your desired block size, and to always read less than the block size. If the size of the internal buffer was unspecified when the stream was constructed, its default size is 4 kilobytes (4096 bytes)." Any stream-reading process will use Read(char buffer[], int index, count), which is the method this quote refers to. http://msdn.microsoft.com/en-us/library/9kstw824.aspx (Under "Remarks"). A: As some people have suggested, CopyTo and CopyToAsync should do the job. Here is an example of a TCP server that listens for external connections on port 30303 and pipes them with local port 8085 (written in .NET 5). Most streams should work the same, just pay attention if they are bi-directional or single-direction. using System.Net; using System.Net.Sockets; using System.Threading.Tasks; namespace ConsoleApp1 { class Program { static async Task Main(string[] args) { var externalConnectionListener = new TcpListener(IPAddress.Any, 30303); externalConnectionListener.Start(); while (true) { var externalConnection = await externalConnectionListener.AcceptTcpClientAsync().ConfigureAwait(false); _ = Task.Factory.StartNew(async () => { using NetworkStream externalConnectionStream = externalConnection.GetStream(); using TcpClient internalConnection = new TcpClient("127.0.0.1", 8085); using NetworkStream internalConnectionStream = internalConnection.GetStream(); await Task.WhenAny( externalConnectionStream.CopyToAsync(internalConnectionStream), internalConnectionStream.CopyToAsync(externalConnectionStream)).ConfigureAwait(false); }); } } } }
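For .NET 4.0 and later, the CopyTo method mentioned above removes the need for a hand-written loop; a minimal usage sketch (the file names are placeholders):

using System.IO;

class CopyDemo
{
    static void Main()
    {
        using (FileStream input = File.OpenRead("input.bin"))
        using (FileStream output = File.Create("output.bin"))
        {
            // The second argument is the buffer size; 81920 bytes is the framework default.
            input.CopyTo(output, 81920);
        }
    }
}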
{ "language": "en", "url": "https://stackoverflow.com/questions/129305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: What browser features/plugins for opera, IE, firefox, chrome, safari, etc. do you use for browser compatibility testing? I use the Nightly Tester Tools for Firefox and Fiddler for IE. What do you use? A: Web Developer toolbar for Firefox, Visual Studio JIT debugger for IE, and Chrome's Resource Inspector. We don't use Opera for debugging due to the aforementioned tools, but we do take a look at our stuff to make sure it looks correct in Opera to be on the safe side. A: A nice question. I have never thought about it before. We test our website with a large range of custom-configured browsers. These are the browsers of the different developers; the inhomogeneity is intentional. We only had problems with 2 plugins: Adblock and FlashBlock. I would also test NoScript. A: Opera has Dragonfly (Tools → Advanced → Developer Tools). I like the mouse-over DOM inspector – it's simple, fast and cross-browser. A: In Opera, View > Style and all those various little selections (such as Outline, Class and Id, etc.) have been a great help in figuring out the floats and border constraints of DIVs and SPANs and the class and ID names of elements, and they make switching back to tweak the code so much faster. And that shortcut (CTRL+SHIFT+ALT+U in v9) to load the current page into the W3C validator is nifty too.
{ "language": "en", "url": "https://stackoverflow.com/questions/129310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows (Vista): Set process-priority on a program shortcut Is there any way to launch a program with a shortcut, that sets the process-priority of that program? iTunes is dragging my system to its knees, but when I set the process-priority to "low", somehow, like magic, Windows gets back to its normal responsive self :) A: You learn something new every day. My answer was wrong, but since it was marked accepted I'm editing to be right. Change your shortcut to point to: start /BELOWNORMAL iTunes.exe Instead of just iTunes.exe
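Note that start is a cmd.exe built-in rather than a standalone program, so a shortcut usually cannot launch it directly; the shortcut's Target field would look something like the line below (the iTunes path is an assumption, adjust it to your install; the empty "" is the window-title argument that start expects when the path is quoted):

cmd /c start "" /belownormal "C:\Program Files\iTunes\iTunes.exe"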
{ "language": "en", "url": "https://stackoverflow.com/questions/129312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you handle attachments in your web application? My original question got little response, probably due to poor wording on my part. Since then, I have thought about it and decided to reword it, hopefully for the better! :) We create custom business software for our customers, and quite often they want attachments to be added to certain business entities. For example, they want to attach a Word document to a customer, or an image to a job. I'm curious as to how others are handling the following: * *How the user attaches documents? Single attachment? Batch attachment? *How you display the attached documents? Simple list? Detailed list? *And the killer question, how does the user then edit attached documents? Is this even possible in a web environment? Granted the user can just view the attachment. *Is there a good control library to help manage this process? Our current development environment is ASP.NET and C#, but I think this is a pretty agnostic question when it comes to development tools, save for the fact I need to work in a web environment. It seems we always run into problems with the customer and working with attachments in a web environment so I am looking for some successes that other programmers have had with their user base on how best to interact with attachments. A: * *Start with one file upload control ("Browse button"), and use JavaScript to dynamically add more upload controls if they want to attach multiple files in a single batch. *Display them in a simple list format (Filename, type, size, date), but provide full details somewhere else if they want them. *If they want to edit the files, they have to download them, then re-upload them. Hence, you need a way that they can say "this attachment overrides that old attachment". *I'm not familiar with C# and ASP.NET, so I can't recommend any libraries that will help. A: http://developer.yahoo.com/yui/uploader/
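A minimal server-side sketch of the batch-upload idea from the first answer (ASP.NET 2.0 WebForms; the handler name and the storage folder are assumptions):

// Code-behind sketch: save every file posted by the dynamically added upload controls.
// Requires: using System; using System.IO; using System.Web;
protected void btnAttach_Click(object sender, EventArgs e)
{
    string folder = Server.MapPath("~/App_Data/Attachments");
    Directory.CreateDirectory(folder);

    for (int i = 0; i < Request.Files.Count; i++)
    {
        HttpPostedFile file = Request.Files[i];
        if (file != null && file.ContentLength > 0)
        {
            string name = Path.GetFileName(file.FileName);
            file.SaveAs(Path.Combine(folder, name));
        }
    }
}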
{ "language": "en", "url": "https://stackoverflow.com/questions/129328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Optimistic vs. Pessimistic locking I understand the differences between optimistic and pessimistic locking. Now, could someone explain to me when I would use either one in general? And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query? But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading." A: One use case for optimistic locking is to have your application use the database to allow one of your threads / hosts to 'claim' a task. This is a technique that has come in handy for me on a regular basis. The best example I can think of is for a task queue implemented using a database, with multiple threads claiming tasks concurrently. If a task has status 'Available', 'Claimed', 'Completed', a db query can say something like "Set status='Claimed' where status='Available'". If multiple threads try to change the status in this way, all but the first thread will fail because of dirty data. Note that this is a use case involving only optimistic locking. So as an alternative to saying "Optimistic locking is used when you don't expect many collisions", it can also be used where you expect collisions but want exactly one transaction to succeed. A: A lot of good things have been said above about optimistic and pessimistic locking. One important point to consider is as follows: when using optimistic locking, we need to be careful about how the application will recover from these failures. Especially in asynchronous, message-driven architectures, this can lead to out-of-order message processing or lost updates. Failure scenarios need to be thought through. A: Let's say in an ecommerce app, a user wants to place an order. This code will get executed by multiple threads. In pessimistic locking, when we get the data from the DB, we lock it so no other thread can modify it. We process the data, update the data, and then commit the data. After that, we release the lock. The locking duration is long here: we have locked the database record from the beginning until the commit. In optimistic locking, we get the data and process the data without locking. So multiple threads can execute the code so far concurrently. This speeds things up. While we update, we lock the data. We have to verify that no other thread updated that record. For example, if we had 100 items in inventory and we update it to 99 (because your code might do quantity = quantity - 1), but another thread has already used 1, it should actually be 98. We have a race condition here. In this case, we restart the thread so we execute the same code from the beginning. But this is an expensive operation: you already reached the end, but then you restart. If we had a few race conditions, that would not be a big deal; if the rate of race conditions were high, there would be a lot of threads to restart, and we might end up running in a loop. If the rate of race conditions is high, we should be using pessimistic locking. A: When dealing with conflicts, you have two options: * *You can try to avoid the conflict, and that's what Pessimistic Locking does. *Or, you could allow the conflict to occur, but you need to detect it upon committing your transactions, and that's what Optimistic Locking does. Now, let's consider the following Lost Update anomaly: The Lost Update anomaly can happen in the Read Committed isolation level.
In the diagram above we can see that Alice believes she can withdraw 40 from her account but does not realize that Bob has just changed the account balance, and now there are only 20 left in this account. Pessimistic Locking Pessimistic locking achieves this goal by taking a shared or read lock on the account so Bob is prevented from changing the account. In the diagram above, both Alice and Bob will acquire a read lock on the account table row that both users have read. The database acquires these locks on SQL Server when using Repeatable Read or Serializable. Because both Alice and Bob have read the account with the PK value of 1, neither of them can change it until one user releases the read lock. This is because a write operation requires a write/exclusive lock acquisition, and shared/read locks prevent write/exclusive locks. Only after Alice has committed her transaction and the read lock was released on the account row, Bob UPDATE will resume and apply the change. Until Alice releases the read lock, Bob's UPDATE blocks. Optimistic Locking Optimistic Locking allows the conflict to occur but detects it upon applying Alice's UPDATE as the version has changed. This time, we have an additional version column. The version column is incremented every time an UPDATE or DELETE is executed, and it is also used in the WHERE clause of the UPDATE and DELETE statements. For this to work, we need to issue the SELECT and read the current version prior to executing the UPDATE or DELETE, as otherwise, we would not know what version value to pass to the WHERE clause or to increment. Application-level transactions Relational database systems have emerged in the late 70's early 80's when a client would, typically, connect to a mainframe via a terminal. That's why we still see database systems define terms such as SESSION setting. Nowadays, over the Internet, we no longer execute reads and writes in the context of the same database transaction, and ACID is no longer sufficient. For instance, consider the following use case: Without optimistic locking, there is no way this Lost Update would have been caught even if the database transactions used Serializable. This is because reads and writes are executed in separate HTTP requests, hence on different database transactions. So, optimistic locking can help you prevent Lost Updates even when using application-level transactions that incorporate the user-think time as well. Conclusion Optimistic locking is a very useful technique, and it works just fine even when using less-strict isolation levels, like Read Committed, or when reads and writes are executed in subsequent database transactions. The downside of optimistic locking is that a rollback will be triggered by the data access framework upon catching an OptimisticLockException, therefore losing all the work we've done previously by the currently executing transaction. The more contention, the more conflicts, and the greater the chance of aborting transactions. Rollbacks can be costly for the database system as it needs to revert all current pending changes which might involve both table rows and index records. For this reason, pessimistic locking might be more suitable when conflicts happen frequently, as it reduces the chance of rolling back transactions. A: Optimistic locking means exclusive lock is not used when reading a row so lost update or write skew is not prevented. So, use optimistic locking: * *If lost update or write skew doesn't occur. 
*Or, if there are no problems even if lost update or write skew occurs. Pessimistic locking means an exclusive lock is used when reading a row, so lost update or write skew is prevented. So, use pessimistic locking: * *If lost update or write skew occurs. *Or if there are some problems if lost update or write skew occurs. In MySQL and PostgreSQL, you can use an exclusive lock with SELECT FOR UPDATE. You can check my answer for lost update and write skew examples with optimistic locking (without SELECT FOR UPDATE) and pessimistic locking (with SELECT FOR UPDATE) in MySQL. A: I would think of one more case when pessimistic locking would be a better choice. For optimistic locking, every participant in data modification must agree on using this kind of locking. But if someone modifies the data without taking care of the version column, this will spoil the whole idea of optimistic locking. A: Optimistic locking is used when you don't expect many collisions. It costs less to do a normal operation, but if a collision DOES occur you pay a higher price to resolve it, as the transaction is aborted. Pessimistic locking is used when a collision is anticipated. The transactions which would violate synchronization are simply blocked. To select the proper locking mechanism you have to estimate the amount of reads and writes and plan accordingly. A: There are basically two popular answers here. The first one basically says that optimistic locking suits three-tier architectures where you do not necessarily maintain a connection to the database for your session, whereas pessimistic locking is when you lock the record for your exclusive use until you have finished with it; it has much better integrity than optimistic locking, but you need either a direct connection to the database or an externally available transaction ID. Another answer is that optimistic (versioning) is faster because there is no locking, but (pessimistic) locking performs better when contention is high and it is better to prevent the work than to discard it and start over, or that optimistic locking works best when you have rare collisions, as it is put on this page. I created my answer to explain how "keep connection" is related to "low collisions". To understand which strategy is best for you, think not about the Transactions Per Second your DB has but the duration of a single transaction. Normally, you open a transaction, perform an operation and close the transaction. This is the short, classical transaction ANSI had in mind, and it is fine to get away with locking there. But how do you implement a ticket reservation system where many clients reserve the same rooms/seats at the same time? You browse the offers and fill in the form with lots of available options and current prices. It takes a lot of time, and the options can become obsolete and all the prices invalid between the moment you started to fill in the form and the moment you press the "I agree" button, because there was no lock on the data you accessed and somebody else, more agile, has interfered, changing all the prices, so you need to restart with the new prices. You could lock all the options as you read them, instead. This is the pessimistic scenario. You see why it sucks. Your system can be brought down by a single clown who simply starts a reservation and goes smoking. Nobody can reserve anything before he finishes. Your cash flow drops to zero. That is why optimistic reservations are used in reality. Those who dawdle too long have to restart their reservation at higher prices.
In this optimistic approach you have to record all the data that you read (as in mine Repeated Read) and come to the commit point with your version of data (I want to buy shares at the price you displayed in this quote, not current price). At this point, ANSI transaction is created, which locks the DB, checks if nothing is changed and commits/aborts your operation. IMO, this is effective emulation of MVCC, which is also associated with Optimistic CC and also assumes that your transaction restarts in case of abort, that is you will make a new reservation. A transaction here involves a human user decisions. I am far from understanding how to implement the MVCC manually but I think that long-running transactions with option of restart is the key to understanding the subject. Correct me if I am wrong anywhere. My answer was motivated by this Alex Kuznecov chapter. A: On a more practical note, when updating a distributed system, optimistic locking in the DB may be inadequate to provide the consistency needed across all parts of the distributed system. For example, in applications built on AWS, it is common to have data in both a DB (e.g. DynamoDB) and a storage (e.g. S3). If an update touches both DynamoDB and S3, an optimistic locking in DynamoDB could still leave the data in S3 inconsistent. In this type of cases, it is probably safer to use a pessimistic lock that is held in DynamoDB until the S3 update is finished. In fact, AWS provides a locking library for this purpose. A: In most cases, optimistic locking is more efficient and offers higher performance. When choosing between pessimistic and optimistic locking, consider the following: * *Pessimistic locking is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. For example, if each operation can update a large number of records at a time (the bank might add interest earnings to every account at the end of each month), and two applications are running such operations at the same time, they will have conflicts. *Pessimistic locking is also more appropriate in applications that contain small tables that are frequently updated. In the case of these so-called hotspots, conflicts are so probable that optimistic locking wastes effort in rolling back conflicting transactions. *Optimistic locking is useful if the possibility for conflicts is very low – there are many records but relatively few users, or very few updates and mostly read-type operations. A: Optimistic Locking is a strategy where you read a record, take note of a version number (other methods to do this involve dates, timestamps or checksums/hashes) and check that the version hasn't changed before you write the record back. When you write the record back you filter the update on the version to make sure it's atomic. (i.e. hasn't been updated between when you check the version and write the record to the disk) and update the version in one hit. If the record is dirty (i.e. different version to yours) you abort the transaction and the user can re-start it. This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next. Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. 
It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection. In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions) work. A: Optimistic assumes that nothing's going to change while you're reading it. Pessimistic assumes that something will and so locks it. If it's not essential that the data is perfectly read use optimistic. You might get the odd 'dirty' read - but it's far less likely to result in deadlocks and the like. Most web applications are fine with dirty reads - on the rare occasion the data doesn't exactly tally the next reload does. For exact data operations (like in many financial transactions) use pessimistic. It's essential that the data is accurately read, with no un-shown changes - the extra locking overhead is worth it. Oh, and Microsoft SQL server defaults to page locking - basically the row you're reading and a few either side. Row locking is more accurate but much slower. It's often worth setting your transactions to read-committed or no-lock to avoid deadlocks while reading. A: Optimistic locking and Pessimistic locking are two models for locking data in a database. Optimistic locking : where a record is locked only when changes are committed to the database. Pessimistic locking : where a record is locked while it is edited. Note : In both data-locking models, the lock is released after the changes are committed to the database.
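To make the version-column strategy described above concrete, here is a minimal C#/ADO.NET sketch of an optimistic update; the table, column names and surrounding connection handling are assumptions for illustration only:
// using System.Data; using System.Data.SqlClient;
// the UPDATE only succeeds if nobody changed the row since we read it
using (var cmd = new SqlCommand(
    "UPDATE Account SET Balance = @balance, Version = Version + 1 " +
    "WHERE Id = @id AND Version = @versionWeRead", connection))
{
    cmd.Parameters.AddWithValue("@balance", newBalance);
    cmd.Parameters.AddWithValue("@id", accountId);
    cmd.Parameters.AddWithValue("@versionWeRead", versionWeRead);
    if (cmd.ExecuteNonQuery() == 0)
    {
        // somebody else updated the row first: reload and retry, or report the conflict
        throw new DBConcurrencyException("The account was modified by another user.");
    }
}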
{ "language": "en", "url": "https://stackoverflow.com/questions/129329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "900" }
Q: How do I make a custom Flex component for a gap-fill exercise? The purpose of this component is to test a student's knowledge of a given subject - in the example below it would be geography. The student is given a piece of text with missing words in it. He/she has to fill in (type, in this case) the missing words - hence this kind of test/exercise is called gap-fill. There could be several sentences in the exercise with multiple gaps - something that looks like this: London is the ________ and largest urban area in the _____________. An important settlement for two millennia, London's history goes back to its founding by the ___________. The component must be able to display text with 'floating' gaps within the text. These gaps would have behaviour similar to a TextInput control. Once the student submits the answer, the component will return the words that were typed in, and these are then compared against the expected answers. The component should be able to display the text and the gaps, and dynamically derive all required parameters from the text. The position of the gaps could be marked by a special token - such as #10# - which would mark the position of the gap within the text and the size of the gap (number of characters). Therefore the above text could look like this before being loaded into the component: London is the #10# and largest urban area in the #15#. An important settlement for two millennia, London's history goes back to its founding by the #8#. A: You need a container that supports flow layout. It's not part of the standard Flex framework but you can find some working implementations here (part of the excellent FlexLib) and here (standalone implementation). A: I guess you could have a Canvas, and dynamically add Labels & TextInputs. The problem here would be knowing where the line-breaks go; I'm not sure how you can easily calculate the width of a text-based control from the set text, but it must be possible. I wondered if there is a layout control which can do this for you, but I can only see HBox & VBox, which are too restrictive. Creating or finding a generic auto-wrapping layout control would be useful. A: FlowBox is the way to go. You can use horizontalGap to control spacing between text and input gaps. When it comes to ways of encoding it, I had a version in javascript you are free to look at. Rendering, Encoding of gapfill data. It was part of a pet project for a generic learning activity generator. I have since moved on to Flex. I have made available samples of learning activities in Flex. You won't find a gap-fill there, but you will find a "type in your answer" activity that is close enough. All open source. Be warned however that I wrote this at the time when I was learning Flex... it was an excuse to learn diverse techniques. The code could almost certainly be improved. If you come up with something nifty, consider contributing to the exercist project on eduforge
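For what it's worth, a rough ActionScript 3 sketch of parsing the #10#-style tokens from the question, so the component can derive the gaps from the text at runtime; addLabel and addGap are hypothetical helpers that would create Label and TextInput children inside a flow-layout container such as FlexLib's FlowBox:
private function buildGaps(src:String):void {
    var re:RegExp = /#(\d+)#/g;   // a gap marker: #<number of characters>#
    var last:int = 0;
    var m:Object = re.exec(src);
    while (m != null) {
        addLabel(src.substring(last, m.index));  // plain text before the gap
        addGap(int(m[1]));                       // gap sized in characters
        last = m.index + m[0].length;
        m = re.exec(src);
    }
    addLabel(src.substring(last));               // trailing text after the last gap
}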
{ "language": "en", "url": "https://stackoverflow.com/questions/129330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you redirect to a page using the POST verb? When you call RedirectToAction within a controller, it automatically redirects using an HTTP GET. How do I explicitly tell it to use an HTTP POST? I have an action that accepts both GET and POST requests, and I want to be able to RedirectToAction using POST and send it some values. Like this: this.RedirectToAction( "actionname", new RouteValueDictionary(new { someValue = 2, anotherValue = "text" }) ); I want the someValue and anotherValue values to be sent using an HTTP POST instead of a GET. Does anyone know how to do this? A: I would like to expand the answer of Jason Bunting like this ActionResult action = new SampelController().Index(2, "text"); return action; And Eli will be here for something idea on how to make it generic variable Can get all types of controller A: If you want to pass data between two actions during a redirect without include any data in the query string, put the model in the TempData object. ACTION TempData["datacontainer"] = modelData; VIEW var modelData= TempData["datacontainer"] as ModelDataType; TempData is meant to be a very short-lived instance, and you should only use it during the current and the subsequent requests only! Since TempData works this way, you need to know for sure what the next request will be, and redirecting to another view is the only time you can guarantee this. Therefore, the only scenario where using TempData will reliably work is when you are redirecting. A: For your particular example, I would just do this, since you obviously don't care about actually having the browser get the redirect anyway (by virtue of accepting the answer you have already accepted): [AcceptVerbs(HttpVerbs.Get)] public ActionResult Index() { // obviously these values might come from somewhere non-trivial return Index(2, "text"); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Index(int someValue, string anotherValue) { // would probably do something non-trivial here with the param values return View(); } That works easily and there is no funny business really going on - this allows you to maintain the fact that the second one really only accepts HTTP POST requests (except in this instance, which is under your control anyway) and you don't have to use TempData either, which is what the link you posted in your answer is suggesting. I would love to know what is "wrong" with this, if there is anything. Obviously, if you want to really have sent to the browser a redirect, this isn't going to work, but then you should ask why you would be trying to convert that regardless, since it seems odd to me. A: HTTP doesn't support redirection to a page using POST. When you redirect somewhere, the HTTP "Location" header tells the browser where to go, and the browser makes a GET request for that page. You'll probably have to just write the code for your page to accept GET requests as well as POST requests. A: try this one return Content("<form action='actionname' id='frmTest' method='post'><input type='hidden' name='someValue' value='" + someValue + "' /><input type='hidden' name='anotherValue' value='" + anotherValue + "' /></form><script>document.getElementById('frmTest').submit();</script>"); A: I have just experienced the same problem. 
The solution was to call the controller action like a function: return await ResendConfirmationEmail(new ResendConfirmationEmailViewModel() { Email = input.Email }); The controller action: [HttpPost] [AllowAnonymous] public async Task<IActionResult> ResendConfirmationEmail(ResendConfirmationEmailViewModel input) { ... return View("ResendConfirmationEmailConfirmed"); }
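As a concrete illustration of the TempData approach described above, a minimal ASP.NET MVC sketch; the action and model names are made up for the example:
// first action: stash the model, then redirect with a normal GET
public ActionResult Submit(OrderModel model)
{
    TempData["order"] = model;            // survives exactly one subsequent request
    return RedirectToAction("Confirm");
}

// target action: pick the model back up
public ActionResult Confirm()
{
    var model = TempData["order"] as OrderModel;
    if (model == null)
        return RedirectToAction("Index"); // hit directly, nothing to show
    return View(model);
}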
{ "language": "en", "url": "https://stackoverflow.com/questions/129335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "152" }
Q: How to pass arguments to a constructor in an IOC-framework How can I pass arguments to a constructor in an IOC-framework? I want to do something like: (Trying to be IOC-framework agnostic ;) ) object objectToLogFor = xxx; container.Resolve<ILogging>(objectToLogFor); public class MyLogging : ILogging { public MyLogging(object objectToLogFor){} } It seems that this is not possible in StructureMap. But I would love to see someone prove me wrong. Are other frameworks more feature-rich? Or am I using the IOC-framework in the wrong way? A: In structure map you could achieve this using the With method: string objectToLogFor = "PolicyName"; ObjectFactory.With<string>(objectToLogFor).GetInstance<ILogging>(); See: http://codebetter.com/blogs/jeremy.miller/archive/2008/09/25/using-structuremap-2-5-to-inject-your-entity-objects-into-services.aspx A: For Castle Windsor: var foo = "foo"; var service = this.container.Resolve<TContract>(new { constructorArg1 = foo }); note the use of an anonymous object to specify constructor arguments. using StructureMap: var foo = "foo"; var service = container.With(foo).GetInstance<TContract>(); A: How can this be language-agnostic? This is implementation detail of the framework in question. Spring alows you to specify c'tor args as a list of values/references, if that's your thing. It's not very readable, though, compared to property injection. Some people get hot under the collar about this, and insist that c'tor injection is the only thread-safe approach in java. Technically they're correct, but in practice it tends not to matter. A: It should not be a very common need, but sometimes it is a valid one. Ninject, which is lighter than StructureMap, allows you to pass parameters when retrieving transient objects from the context. Spring.NET too. Most of the time, objects declared in an IoC container aren't transient, and accept others non-transient objects through constructors/properties/methods as dependencies. However, if you really wan't to use the container as a factory, and if you have enough control on the objects you want to resolve, you could use property or method injection even if it sounds less natural and more risky in some way. A: Yes, other frameworks are more feature-rich - you need to use an ioc framework that allows for constructor injection. Spring is an example of a multi-language ioc container that allows constructor dependency injection. A: Other IoC frameworks are more feature rich. I.e. check out the ParameterResolution with Autofac A: You can also do that with Windsor easily
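Since Ninject is mentioned above without an example, this is roughly what passing a constructor argument looks like there; treat the exact call as an assumption to verify against your Ninject version, and ILogging/MyLogging are the types from the question:
// Ninject 2.x style, using Ninject.Parameters
var kernel = new StandardKernel();
kernel.Bind<ILogging>().To<MyLogging>();

object objectToLogFor = GetTheObject();   // hypothetical source of the value
var logger = kernel.Get<ILogging>(
    new ConstructorArgument("objectToLogFor", objectToLogFor));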
{ "language": "en", "url": "https://stackoverflow.com/questions/129345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Passing in parameter from html element with jQuery I'm working with jQuery for the first time and need some help. I have html that looks like the following: <div id='comment-8' class='comment'> <p>Blah blah</p> <div class='tools'></div> </div> <div id='comment-9' class='comment'> <p>Blah blah something else</p> <div class='tools'></div> </div> I'm trying to use jQuery to add spans to the .tools divs that call various functions when clicked. The functions need to receive the id (either the entire 'comment-8' or just the '8' part) of the parent comment so I can then show a form or other information about the comment. What I have thus far is: <script type='text/javascript'> $(function() { var actionSpan = $('<span>[Do Something]</span>'); actionSpan.bind('click', doSomething); $('.tools').append(actionSpan); }); function doSomething(commentId) { alert(commentId); } </script> I'm stuck on how to populate the commentId parameter for doSomething. Perhaps instead of the id, I should be passing in a reference to the span that was clicked. That would probably be fine as well, but I'm unsure of how to accomplish that. Thanks, Brian A: Event callbacks are called with an event object as the first argument; you can't pass something else in that way. This event object has a target property that references the element it was called for, and the this variable is a reference to the element the event handler was attached to. Note that the id lives on the outer .comment div, not on .tools, so you could do the following: function doSomething(event) { var id = $(event.target).parents(".comment").attr("id"); id = id.substring(id.indexOf("-") + 1); alert(id); } ...or: function doSomething(event) { var id = $(this).parents(".comment").attr("id"); id = id.substring(id.indexOf("-") + 1); alert(id); } A: To get from the span up to the surrounding divs, you can use parent() (if you know the exact relationship), like this: $(this).parent().attr('id'); or if the structure might be more deeply nested, you can use parents() to search up the DOM tree, like this: $(this).parents('div:eq(0)').attr('id'). To keep my answer simple, I left off matching the class "comment", but of course you could do that if it helps narrow down the div you are searching for. A: You don't have a lot of control over the arguments passed to a bound event handler. Perhaps try something like this for your definition of doSomething(): function doSomething() { var commentId = $(this).parent().attr('id'); alert(commentId); } A: It might be easier to loop through the comments, and add the tool thing to each. That way you can give them each their own function. I've got the function returning a function so that when it's called later, it has the correct comment ID available to it. The other solutions (that navigate back up to find the ID of the parent) will likely be more memory efficient.
<script type='text/javascript'> $(function() { $('.comment').each(function() { var comment = this; $('.tools', comment).append( $('<span>[Do Something]</span>').click(commentTool(comment.id)) ); }); }); function commentTool(commentId) { return function() { alert('Do cool stuff to ' + commentId); } } </script> A: Getting a little fancy to give you an idea of some of the things you can do: var tool = $('<span>[Tool]</span>'); var action = function (id) { return function () { alert(id); } } $('div.comment').each(function () { var id = $(this).attr('id'); var child = tool.clone(); child.click(action(id)); $('.tools', this).append(child); }); A: The handler you bind() receives an event object, not the element; inside the handler, this is the element the handler was attached to (in your case the span), so to get the id you want you should do some DOM traversal like: function doSomething(eventObject) { var elComment = this.parentNode.parentNode; //or something like that, //didn't test it var commentId = elComment.getAttribute('id'); alert(commentId); }
{ "language": "en", "url": "https://stackoverflow.com/questions/129360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Restoring SplitterDistance inside TabControl is inconsistent I'm writing a WinForms application and one of the tabs in my TabControl has a SplitContainer. I'm saving the SplitterDistance in the user's application settings, but the restore is inconsistent. If the tab page with the splitter is visible, then the restore works and the splitter distance is as I left it. If some other tab is selected, then the splitter distance is wrong. A: I found the problem. Each tab page doesn't get resized to match the tab control until it gets selected. For example, if the tab control is 100 pixels wide in the designer, and you've just set it to 500 pixels during load, then setting the splitter distance to 50 on a hidden tab page will get resized to a splitter distance of 250 when you select that tab page. I worked around it by recording the SplitterDistance and Width properties of the SplitContainer in my application settings. Then on restore I set the SplitterDistance to recordedSplitterDistance * Width / recordedWidth. A: As it was mentioned, control with SplitContainer doesn't get resized to match the tab control until it gets selected. If you handle restoring by setting SplitterDistance in percentage (storedDistance * fullDistance / 100) in case of FixedPanel.None, you will see the splitter moving in some time because of precision of calculations. I found another solution for this problem. I subscribes to one of the events, for example Paint event. This event comes after control’s resizing, so the SplitContainer will have correct value. After first restoring you should unsubscribe from this event in order to restore only once: private void MainForm_Load(object sender, EventArgs e) { splitContainerControl.Paint += new PaintEventHandler(splitContainerControl_Paint); } void splitContainerControl_Paint(object sender, PaintEventArgs e) { splitContainerControl.Paint -= splitContainerControl_Paint; // Handle restoration here } A: For handling all cases of FixedPanel and orientation, something like the following should work: var fullDistance = new Func<SplitContainer, int>( c => c.Orientation == Orientation.Horizontal ? c.Size.Height : c.Size.Width); // Store as percentage if FixedPanel.None int distanceToStore = spl.FixedPanel == FixedPanel.Panel1 ? spl.SplitterDistance : spl.FixedPanel == FixedPanel.Panel2 ? fullDistance(spl) - spl.SplitterDistance : (int)(((double)spl.SplitterDistance) / ((double)fullDistance(spl))) * 100; Then do the same when restoring // calculate splitter distance with regard to current control size int distanceToRestore = spl.FixedPanel == FixedPanel.Panel1 ? storedDistance: spl.FixedPanel == FixedPanel.Panel2 ? fullDistance(spl) - storedDistance : storedDistance * fullDistance(spl) / 100; A: I had the same problem. In my particular case, I was using forms, that I transformed into tabpages and added to the tab control. The solution I found, was to set the splitter distances in the Form_Shown event, not in the load event. A: Save the splitter distance as a percentage of the split container height. Then restore the splitter distance percentage using the current split container height. /// <summary> /// Gets or sets the relative size of the top and bottom split window panes. 
/// </summary> [Browsable(false)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)] [UserScopedSetting] [DefaultSettingValue(".5")] public double SplitterDistancePercent { get { return (double)toplevelSplitContainer.SplitterDistance / toplevelSplitContainer.Size.Height; } set { toplevelSplitContainer.SplitterDistance = (int)((double)toplevelSplitContainer.Size.Height * value); } } A: There's an easier solution. If Panel1 is set as the fixed panel via the SplitContainer.FixedPanel property, it all behaves as expected. A: Restoring splitter distances has given me a lot of grief too. I have found that restoring them from my user settings in the form (or user control) Load event gave much better results than using the constructor. Trying to do it in the constructor gave me all sorts of weird behaviour. A: The answer is timing. You must set SplitterDistance only once the window is done resizing: flag when the final resize has happened and then set SplitterDistance. In that case everything works correctly. A: Set the containing TabPage.Width = TabControl.Width - 8 before setting the SplitContainer.SplitterDistance
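A minimal sketch of the accepted workaround above (recording both SplitterDistance and Width, then restoring proportionally); the Settings property names are assumptions for the example:
// on save, e.g. in FormClosing
Properties.Settings.Default.SplitterDistance = splitContainer1.SplitterDistance;
Properties.Settings.Default.SplitterWidth = splitContainer1.Width;
Properties.Settings.Default.Save();

// on restore, e.g. in Form Load
int recordedDistance = Properties.Settings.Default.SplitterDistance;
int recordedWidth = Properties.Settings.Default.SplitterWidth;
if (recordedWidth > 0)
{
    // scale the saved distance to the control's current width, since hidden tab
    // pages are not resized until they are first selected
    splitContainer1.SplitterDistance = recordedDistance * splitContainer1.Width / recordedWidth;
}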
{ "language": "en", "url": "https://stackoverflow.com/questions/129362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Anybody know where I can get docs or tutorials on VSS 2005 Integration via .net I know that I can add the SourceSafeTypeLib to a project and can explore it in object browser and find obvious things (GetLatest, etc), but I am looking for some more thorough documentation or specific tutorials on things like "undo another user's checkout" or"determine who has a file checked out. If anyone knows where to find this material, how to do advanced or non-obvious tasks with VSS, or knows how to disassemble a COM api (so I can engineer my own api) it would be much appreciated. A: You might check out Microsoft's documentation on the Microsoft.VisualStudio.SourceSafe.Interop namespace (I assume that's what you've looked at). I used it to create a VB.NET utility that does get latest, check-outs, and check-ins against a VSS 2005 database. A quick perusal revealed the IVSSItem.UndoCheckout method, and the IVSSCheckouts type, which is a collection of checkouts for a given file. A: You can also have a look at Visual SourceSafe Automation article at MSDN.
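To give a flavour of what the interop calls look like for the tasks mentioned in the question (seeing who has a file checked out, undoing a checkout), here is a rough C# sketch; the paths and credentials are made up, and the exact signatures should be checked against the SourceSafeTypeLib you reference, since this is written from memory:
using SourceSafeTypeLib;  // COM reference to the Microsoft SourceSafe type library

var db = new VSSDatabaseClass();
db.Open(@"\\server\vss\srcsafe.ini", "admin", "password");
IVSSItem item = db.get_VSSItem("$/Project/SomeFile.cs", false);

// who has it checked out?
foreach (IVSSCheckout checkout in item.Checkouts)
    Console.WriteLine(checkout.Username);

// undo the checkout (undoing another user's checkout needs admin rights)
item.UndoCheckout(@"C:\working\SomeFile.cs", 0);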
{ "language": "en", "url": "https://stackoverflow.com/questions/129382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GetElementsByTagName functionality from .net code-behind page? I am writing a webpage in C# .NET. In javascript there is a function called GetElementsByTagName... this is nice for javascript invoked from the .aspx page. My question is, is there any way I can have this kind of functionality from my C# code-behind? -- The scenario for those curious: I used an asp:repeater to generate a lot of buttons, and now I'm essentially trying to make a button that clicks them all. I tried storing all the buttons in a list as I created them, but the list is getting cleared during every postback, so I thought I could try the above method. A: Try this (the repeater's children are RepeaterItems, and a button's Click event can be raised through IPostBackEventHandler): foreach (RepeaterItem item in myRepeater.Items) { foreach (Control ctl in item.Controls) { if (ctl is Button) { ((IPostBackEventHandler)ctl).RaisePostBackEvent(string.Empty); } } } HTH... A: FindControl(), or iterate through the controls on the page... For Each ctl As Control In Me.Controls If ctl.ID = whatYouWant Then do stuff Next 'ctl --If you are creating the controls, you should be setting their IDs Dim ctl As New Control() ctl.ID = "blah1" etc... A: Well, you can find controls with the page's FindControl method, but Repeater elements have names generated by .net. As an aside, if you really want to, you could store the list of buttons in your page's ViewState (or perhaps a list of their names). A: Whenever you do any postback, everything is recreated, including your databound controls. If your list is gone, so are the button controls. Unless, of course, you've recreated them, and in that case you should have recreated the list as well. A: I don't know exactly what you mean by clicks them all. But how would the following code work for you? I don't know, I haven't tested... protected void Page_Load(object sender, EventArgs e) { foreach (Control control in GetControlsByType(this, typeof(TextBox))) { //Do something?
} } public static System.Collections.Generic.List<Control> GetControlsByType(Control ctrl, Type t) { System.Collections.Generic.List<Control> cntrls = new System.Collections.Generic.List<Control>(); foreach (Control child in ctrl.Controls) { if (t == child.GetType()) cntrls.Add(child); cntrls.AddRange(GetControlsByType(child, t)); } return cntrls; } A: ASPX: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <asp:Repeater runat="server" ID="Repeater1"> <ItemTemplate> <asp:Button runat="server" ID="Button1" Text="I was NOT changed" /> </ItemTemplate> </asp:Repeater> </form> </body> </html> ASPX.CS: using System; using System.Data; using System.Web.UI; using System.Web.UI.WebControls; public partial class _Default : System.Web.UI.Page { protected void Page_Load(Object sender, EventArgs e) { DataTable dt = new DataTable(); dt.Columns.Add(new DataColumn("column")); DataRow dr = null; for (Int32 i = 0; i < 10; i++) { dr = dt.NewRow(); dr["column"] = ""; dt.Rows.Add(dr); } this.Repeater1.DataSource = dt; this.Repeater1.DataBind(); foreach (RepeaterItem ri in this.Repeater1.Controls) { foreach (Control c in ri.Controls) { Button b = new Button(); try { b = (Button)c; } catch (Exception exc) { } b.Text = "I was found and changed"; } } } } A: Or a variation of my own code, only changing the ASPX.CS: using System; using System.Data; using System.Web.UI; using System.Web.UI.WebControls; using System.Collections.Generic; public partial class _Default : System.Web.UI.Page { protected void Page_Load(Object sender, EventArgs e) { #region Fill Repeater1 with some dummy data DataTable dt = new DataTable(); dt.Columns.Add(new DataColumn("column")); DataRow dr = null; for (Int32 i = 0; i < 10; i++) { dr = dt.NewRow(); dr["column"] = ""; dt.Rows.Add(dr); } this.Repeater1.DataSource = dt; this.Repeater1.DataBind(); #endregion foreach (Button b in this.FindButtonsInRepeater(ref this.Repeater1)) { b.Text = "I was found and changed"; } } private List<Button> FindButtonsInRepeater(ref Repeater repeater) { List<Button> buttonsFound = new List<Button>(); foreach (RepeaterItem ri in repeater.Controls) { foreach (Control c in ri.Controls) { try { buttonsFound.Add((Button)c); } catch (Exception exc) { } } } return buttonsFound; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/129388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you do a deep copy of an object in .NET? I want a true deep copy. In Java, this was easy, but how do you do it in C#? A: You can try this public static object DeepCopy(object obj) { if (obj == null) return null; Type type = obj.GetType(); if (type.IsValueType || type == typeof(string)) { return obj; } else if (type.IsArray) { Type elementType = Type.GetType( type.FullName.Replace("[]", string.Empty)); var array = obj as Array; Array copied = Array.CreateInstance(elementType, array.Length); for (int i = 0; i < array.Length; i++) { copied.SetValue(DeepCopy(array.GetValue(i)), i); } return Convert.ChangeType(copied, obj.GetType()); } else if (type.IsClass) { object toret = Activator.CreateInstance(obj.GetType()); FieldInfo[] fields = type.GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance); foreach (FieldInfo field in fields) { object fieldValue = field.GetValue(obj); if (fieldValue == null) continue; field.SetValue(toret, DeepCopy(fieldValue)); } return toret; } else throw new ArgumentException("Unknown type"); } Thanks to DetoX83 article on code project. A: Important Note BinaryFormatter has been deprecated, and will no longer be available in .NET after November 2023. See BinaryFormatter Obsoletion Strategy I've seen a few different approaches to this, but I use a generic utility method as such: public static T DeepClone<T>(this T obj) { using (var ms = new MemoryStream()) { var formatter = new BinaryFormatter(); formatter.Serialize(ms, obj); ms.Position = 0; return (T) formatter.Deserialize(ms); } } Notes: * *Your class MUST be marked as [Serializable] for this to work. *Your source file must include the following code: using System.Runtime.Serialization.Formatters.Binary; using System.IO; A: You can use Nested MemberwiseClone to do a deep copy. Its almost the same speed as copying a value struct, and its an order of magnitude faster than (a) reflection or (b) serialization (as described in other answers on this page). Note that if you use Nested MemberwiseClone for a deep copy, you have to manually implement a ShallowCopy for each nested level in the class, and a DeepCopy which calls all said ShallowCopy methods to create a complete clone. This is simple: only a few lines in total, see the demo code below. Here is the output of the code showing the relative performance difference (4.77 seconds for deep nested MemberwiseCopy vs. 39.93 seconds for Serialization). Using nested MemberwiseCopy is almost as fast as copying a struct, and copying a struct is pretty darn close to the theoretical maximum speed .NET is capable of, which is probably quite close to the speed of the same thing in C or C++ (but would have to run some equivalent benchmarks to check this claim). 
Demo of shallow and deep copy, using classes and MemberwiseClone: Create Bob Bob.Age=30, Bob.Purchase.Description=Lamborghini Clone Bob >> BobsSon Adjust BobsSon details BobsSon.Age=2, BobsSon.Purchase.Description=Toy car Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob: Bob.Age=30, Bob.Purchase.Description=Lamborghini Elapsed time: 00:00:04.7795670,30000000 Demo of shallow and deep copy, using structs and value copying: Create Bob Bob.Age=30, Bob.Purchase.Description=Lamborghini Clone Bob >> BobsSon Adjust BobsSon details: BobsSon.Age=2, BobsSon.Purchase.Description=Toy car Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob: Bob.Age=30, Bob.Purchase.Description=Lamborghini Elapsed time: 00:00:01.0875454,30000000 Demo of deep copy, using class and serialize/deserialize: Elapsed time: 00:00:39.9339425,30000000 To understand how to do a deep copy using MemberwiseCopy, here is the demo project: // Nested MemberwiseClone example. // Added to demo how to deep copy a reference class. [Serializable] // Not required if using MemberwiseClone, only used for speed comparison using serialization. public class Person { public Person(int age, string description) { this.Age = age; this.Purchase.Description = description; } [Serializable] // Not required if using MemberwiseClone public class PurchaseType { public string Description; public PurchaseType ShallowCopy() { return (PurchaseType)this.MemberwiseClone(); } } public PurchaseType Purchase = new PurchaseType(); public int Age; // Add this if using nested MemberwiseClone. // This is a class, which is a reference type, so cloning is more difficult. public Person ShallowCopy() { return (Person)this.MemberwiseClone(); } // Add this if using nested MemberwiseClone. // This is a class, which is a reference type, so cloning is more difficult. public Person DeepCopy() { // Clone the root ... Person other = (Person) this.MemberwiseClone(); // ... then clone the nested class. other.Purchase = this.Purchase.ShallowCopy(); return other; } } // Added to demo how to copy a value struct (this is easy - a deep copy happens by default) public struct PersonStruct { public PersonStruct(int age, string description) { this.Age = age; this.Purchase.Description = description; } public struct PurchaseType { public string Description; } public PurchaseType Purchase; public int Age; // This is a struct, which is a value type, so everything is a clone by default. public PersonStruct ShallowCopy() { return (PersonStruct)this; } // This is a struct, which is a value type, so everything is a clone by default. public PersonStruct DeepCopy() { return (PersonStruct)this; } } // Added only for a speed comparison. 
public class MyDeepCopy { public static T DeepCopy<T>(T obj) { object result = null; using (var ms = new MemoryStream()) { var formatter = new BinaryFormatter(); formatter.Serialize(ms, obj); ms.Position = 0; result = (T)formatter.Deserialize(ms); ms.Close(); } return (T)result; } } Then, call the demo from main: void MyMain(string[] args) { { Console.Write("Demo of shallow and deep copy, using classes and MemberwiseCopy:\n"); var Bob = new Person(30, "Lamborghini"); Console.Write(" Create Bob\n"); Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description); Console.Write(" Clone Bob >> BobsSon\n"); var BobsSon = Bob.DeepCopy(); Console.Write(" Adjust BobsSon details\n"); BobsSon.Age = 2; BobsSon.Purchase.Description = "Toy car"; Console.Write(" BobsSon.Age={0}, BobsSon.Purchase.Description={1}\n", BobsSon.Age, BobsSon.Purchase.Description); Console.Write(" Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:\n"); Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description); Debug.Assert(Bob.Age == 30); Debug.Assert(Bob.Purchase.Description == "Lamborghini"); var sw = new Stopwatch(); sw.Start(); int total = 0; for (int i = 0; i < 100000; i++) { var n = Bob.DeepCopy(); total += n.Age; } Console.Write(" Elapsed time: {0},{1}\n", sw.Elapsed, total); } { Console.Write("Demo of shallow and deep copy, using structs:\n"); var Bob = new PersonStruct(30, "Lamborghini"); Console.Write(" Create Bob\n"); Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description); Console.Write(" Clone Bob >> BobsSon\n"); var BobsSon = Bob.DeepCopy(); Console.Write(" Adjust BobsSon details:\n"); BobsSon.Age = 2; BobsSon.Purchase.Description = "Toy car"; Console.Write(" BobsSon.Age={0}, BobsSon.Purchase.Description={1}\n", BobsSon.Age, BobsSon.Purchase.Description); Console.Write(" Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:\n"); Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description); Debug.Assert(Bob.Age == 30); Debug.Assert(Bob.Purchase.Description == "Lamborghini"); var sw = new Stopwatch(); sw.Start(); int total = 0; for (int i = 0; i < 100000; i++) { var n = Bob.DeepCopy(); total += n.Age; } Console.Write(" Elapsed time: {0},{1}\n", sw.Elapsed, total); } { Console.Write("Demo of deep copy, using class and serialize/deserialize:\n"); int total = 0; var sw = new Stopwatch(); sw.Start(); var Bob = new Person(30, "Lamborghini"); for (int i = 0; i < 100000; i++) { var BobsSon = MyDeepCopy.DeepCopy<Person>(Bob); total += BobsSon.Age; } Console.Write(" Elapsed time: {0},{1}\n", sw.Elapsed, total); } Console.ReadKey(); } Again, note that if you use Nested MemberwiseClone for a deep copy, you have to manually implement a ShallowCopy for each nested level in the class, and a DeepCopy which calls all said ShallowCopy methods to create a complete clone. This is simple: only a few lines in total, see the demo code above. Note that when it comes to cloning an object, there is is a big difference between a "struct" and a "class": * *If you have a "struct", it's a value type so you can just copy it, and the contents will be cloned. *If you have a "class", it's a reference type, so if you copy it, all you are doing is copying the pointer to it. To create a true clone, you have to be more creative, and use a method which creates another copy of the original object in memory. 
*Cloning objects incorrectly can lead to very difficult-to-pin-down bugs. In production code, I tend to implement a checksum to double check that the object has been cloned properly, and hasn't been corrupted by another reference to it. This checksum can be switched off in Release mode. *I find this method quite useful: often, you only want to clone parts of the object, not the entire thing. It's also essential for any use case where you are modifying objects, then feeding the modified copies into a queue. Update It's probably possible to use reflection to recursively walk through the object graph to do a deep copy. WCF uses this technique to serialize an object, including all of its children. The trick is to annotate all of the child objects with an attribute that makes it discoverable. You might lose some performance benefits, however. Update Quote on independent speed test (see comments below): I've run my own speed test using Neil's serialize/deserialize extension method, Contango's Nested MemberwiseClone, Alex Burtsev's reflection-based extension method and AutoMapper, 1 million times each. Serialize-deserialize was slowest, taking 15.7 seconds. Then came AutoMapper, taking 10.1 seconds. Much faster was the reflection-based method which took 2.4 seconds. By far the fastest was Nested MemberwiseClone, taking 0.1 seconds. Comes down to performance versus hassle of adding code to each class to clone it. If performance isn't an issue go with Alex Burtsev's method. – Simon Tewsi A: I wrote a deep object copy extension method, based on recursive "MemberwiseClone". It is fast (three times faster than BinaryFormatter), and it works with any object. You don't need a default constructor or serializable attributes. Source code: using System.Collections.Generic; using System.Reflection; using System.ArrayExtensions; namespace System { public static class ObjectExtensions { private static readonly MethodInfo CloneMethod = typeof(Object).GetMethod("MemberwiseClone", BindingFlags.NonPublic | BindingFlags.Instance); public static bool IsPrimitive(this Type type) { if (type == typeof(String)) return true; return (type.IsValueType & type.IsPrimitive); } public static Object Copy(this Object originalObject) { return InternalCopy(originalObject, new Dictionary<Object, Object>(new ReferenceEqualityComparer())); } private static Object InternalCopy(Object originalObject, IDictionary<Object, Object> visited) { if (originalObject == null) return null; var typeToReflect = originalObject.GetType(); if (IsPrimitive(typeToReflect)) return originalObject; if (visited.ContainsKey(originalObject)) return visited[originalObject]; if (typeof(Delegate).IsAssignableFrom(typeToReflect)) return null; var cloneObject = CloneMethod.Invoke(originalObject, null); if (typeToReflect.IsArray) { var arrayType = typeToReflect.GetElementType(); if (IsPrimitive(arrayType) == false) { Array clonedArray = (Array)cloneObject; clonedArray.ForEach((array, indices) => array.SetValue(InternalCopy(clonedArray.GetValue(indices), visited), indices)); } } visited.Add(originalObject, cloneObject); CopyFields(originalObject, visited, cloneObject, typeToReflect); RecursiveCopyBaseTypePrivateFields(originalObject, visited, cloneObject, typeToReflect); return cloneObject; } private static void RecursiveCopyBaseTypePrivateFields(object originalObject, IDictionary<object, object> visited, object cloneObject, Type typeToReflect) { if (typeToReflect.BaseType != null) { RecursiveCopyBaseTypePrivateFields(originalObject, visited, cloneObject, 
typeToReflect.BaseType); CopyFields(originalObject, visited, cloneObject, typeToReflect.BaseType, BindingFlags.Instance | BindingFlags.NonPublic, info => info.IsPrivate); } } private static void CopyFields(object originalObject, IDictionary<object, object> visited, object cloneObject, Type typeToReflect, BindingFlags bindingFlags = BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.FlattenHierarchy, Func<FieldInfo, bool> filter = null) { foreach (FieldInfo fieldInfo in typeToReflect.GetFields(bindingFlags)) { if (filter != null && filter(fieldInfo) == false) continue; if (IsPrimitive(fieldInfo.FieldType)) continue; var originalFieldValue = fieldInfo.GetValue(originalObject); var clonedFieldValue = InternalCopy(originalFieldValue, visited); fieldInfo.SetValue(cloneObject, clonedFieldValue); } } public static T Copy<T>(this T original) { return (T)Copy((Object)original); } } public class ReferenceEqualityComparer : EqualityComparer<Object> { public override bool Equals(object x, object y) { return ReferenceEquals(x, y); } public override int GetHashCode(object obj) { if (obj == null) return 0; return obj.GetHashCode(); } } namespace ArrayExtensions { public static class ArrayExtensions { public static void ForEach(this Array array, Action<Array, int[]> action) { if (array.LongLength == 0) return; ArrayTraverse walker = new ArrayTraverse(array); do action(array, walker.Position); while (walker.Step()); } } internal class ArrayTraverse { public int[] Position; private int[] maxLengths; public ArrayTraverse(Array array) { maxLengths = new int[array.Rank]; for (int i = 0; i < array.Rank; ++i) { maxLengths[i] = array.GetLength(i) - 1; } Position = new int[array.Rank]; } public bool Step() { for (int i = 0; i < Position.Length; ++i) { if (Position[i] < maxLengths[i]) { Position[i]++; for (int j = 0; j < i; j++) { Position[j] = 0; } return true; } } return false; } } } } A: The best way is: public interface IDeepClonable<T> where T : class { T DeepClone(); } public class MyObj : IDeepClonable<MyObj> { public MyObj Clone() { var myObj = new MyObj(); myObj._field1 = _field1;//value type myObj._field2 = _field2;//value type myObj._field3 = _field3;//value type if (_child != null) { myObj._child = _child.DeepClone(); //reference type .DeepClone() that does the same } int len = _array.Length; myObj._array = new MyObj[len]; // array / collection for (int i = 0; i < len; i++) { myObj._array[i] = _array[i]; } return myObj; } private bool _field1; public bool Field1 { get { return _field1; } set { _field1 = value; } } private int _field2; public int Property2 { get { return _field2; } set { _field2 = value; } } private string _field3; public string Property3 { get { return _field3; } set { _field3 = value; } } private MyObj _child; private MyObj Child { get { return _child; } set { _child = value; } } private MyObj[] _array = new MyObj[4]; } A: I believe that the BinaryFormatter approach is relatively slow (which came as a surprise to me!). You might be able to use ProtoBuf .NET for some objects if they meet the requirements of ProtoBuf. From the ProtoBuf Getting Started page (http://code.google.com/p/protobuf-net/wiki/GettingStarted): Notes on types supported: Custom classes that: * *Are marked as data-contract *Have a parameterless constructor *For Silverlight: are public *Many common primitives, etc. 
*Single dimension arrays: T[] *List<T> / IList<T> *Dictionary<TKey, TValue> / IDictionary<TKey, TValue> *any type which implements IEnumerable<T> and has an Add(T) method The code assumes that types will be mutable around the elected members. Accordingly, custom structs are not supported, since they should be immutable. If your class meets these requirements you could try: public static void deepCopy<T>(ref T object2Copy, ref T objectCopy) { using (var stream = new MemoryStream()) { Serializer.Serialize(stream, object2Copy); stream.Position = 0; objectCopy = Serializer.Deserialize<T>(stream); } } Which is VERY fast indeed... Edit: Here is working code for a modification of this (tested on .NET 4.6). It uses System.Xml.Serialization and System.IO. No need to mark classes as serializable. public void DeepCopy<T>(ref T object2Copy, ref T objectCopy) { using (var stream = new MemoryStream()) { var serializer = new XS.XmlSerializer(typeof(T)); serializer.Serialize(stream, object2Copy); stream.Position = 0; objectCopy = (T)serializer.Deserialize(stream); } } A: Maybe you only need a shallow copy, in that case use Object.MemberWiseClone(). There are good recommendations in the documentation for MemberWiseClone() for strategies to deep copy: - http://msdn.microsoft.com/en-us/library/system.object.memberwiseclone.aspx A: Building on Kilhoffer's solution... With C# 3.0 you can create an extension method as follows: public static class ExtensionMethods { // Deep clone public static T DeepClone<T>(this T a) { using (MemoryStream stream = new MemoryStream()) { BinaryFormatter formatter = new BinaryFormatter(); formatter.Serialize(stream, a); stream.Position = 0; return (T) formatter.Deserialize(stream); } } } which extends any class that's been marked as [Serializable] with a DeepClone method MyClass copy = obj.DeepClone(); A: The MSDN documentation seems to hint that Clone should perform a deep copy, but it is never explicitly stated: The ICloneable interface contains one member, Clone, which is intended to support cloning beyond that supplied by MemberWiseClone… The MemberwiseClone method creates a shallow copy… You can find my post helpful. http://pragmaticcoding.com/index.php/cloning-objects-in-c/ A: public static object CopyObject(object input) { if (input != null) { object result = Activator.CreateInstance(input.GetType()); foreach (FieldInfo field in input.GetType().GetFields(Consts.AppConsts.FullBindingList)) { if (field.FieldType.GetInterface("IList", false) == null) { field.SetValue(result, field.GetValue(input)); } else { IList listObject = (IList)field.GetValue(result); if (listObject != null) { foreach (object item in ((IList)field.GetValue(input))) { listObject.Add(CopyObject(item)); } } } } return result; } else { return null; } } This way is a few times faster than BinarySerialization AND this does not require the [Serializable] attribute.
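Given the note above that BinaryFormatter is deprecated, a serialization-based deep copy can also be done with System.Text.Json on modern .NET. A minimal sketch; the usual JSON limitations apply (public properties only, a parameterless constructor, no reference cycles):
using System.Text.Json;

public static class CloneExtensions
{
    public static T DeepClone<T>(this T source)
    {
        // round-trip through JSON; fine for plain data objects,
        // not for types relying on private fields or cyclic graphs
        string json = JsonSerializer.Serialize(source);
        return JsonSerializer.Deserialize<T>(json);
    }
}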
{ "language": "en", "url": "https://stackoverflow.com/questions/129389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "682" }
Q: How do I add Active Directory support to Windows PE? I want to query Active Directory from Windows PE 2.0, which is not supported "out of the box." Microsoft seems to suggest that this is possible, but not with any tools they provide. What do you recommend? A: There seem to be instructions here, and the author claims to query AD from WinPE. http://www.clientarchitect.com/blog1.php/2008/06/18/windows-pe-2-0-ad-scripting-requirements A: I recently needed a connection to AD from WinPE to retrieve some computer information. I tested the solution above and another one based on ADSI, but neither worked for me in ADK 1709. My final solution was to use WMI against a domain controller with alternate credentials, so I can get everything I need in a single line :) (Get-WmiObject -Namespace 'root\directory\ldap' -Query "Select DS_info from DS_computer where DS_cn = $($AccountName)" -ComputerName $Domain -Credential $myADCred).$($Myattribute) $AccountName is the name of the computer I am searching for in AD. $Domain is an FQDN pointing to your DC, e.g. xyz.yourdomain.com. $MyADCred is a credential object containing a user and password with the necessary rights in AD. $Myattribute is the attribute I want to read from the computer object in AD. Have a nice deployment :) Yassine A: Installing the ADSI package from deployvista.com solved the problem for me, but your mileage may vary.
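A rough PowerShell sketch of how the variables in that WMI one-liner might be wired up. The domain controller name, computer name and attribute below are placeholders, and Get-Credential's -Message parameter assumes a reasonably recent PowerShell:

# placeholders: replace the DC name, computer name and DS_* attribute with your own
$myADCred    = Get-Credential -Message "Account with read access to AD"
$Domain      = "dc01.yourdomain.com"     # FQDN of a reachable domain controller
$AccountName = "'PC-001'"                # computer name, quoted for the WQL string comparison
$Myattribute = "DS_info"                 # whichever DS_* attribute you want back

(Get-WmiObject -Namespace 'root\directory\ldap' `
    -Query "Select $Myattribute from DS_computer where DS_cn = $($AccountName)" `
    -ComputerName $Domain -Credential $myADCred).$($Myattribute)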
{ "language": "en", "url": "https://stackoverflow.com/questions/129391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I use DoxyGen to document ActionScript code? How do I configure DoxyGen to document ActionScript files? I've included the *.as and *.asi files in doxygen's search pattern, but the classes, functions and variables don't show up there. A: I've been able to produce SOME documentation with DoxyGen (What can I say - I like its features and capabilities) by doing the following: Add *.as and *.asi to the list of file types to input. Select: OPTIMIZE_OUTPUT_JAVA = YES EXTRACT_ALL = YES HIDE_UNDOC_MEMBERS = NO HIDE_UNDOC_CLASSES = NO Another issue in AS3 is the package statement. You need to tell DoxyGen to ignore the package definition. This is easy to do using cond. So you'll change the line: package myPackage { into /// @cond package myPackage { /// @endcond Which will cause Doxygen to ignore the line(s) between cond and endcond. Note that there seems to be no need to do the same for the closing curly bracket at the bottom of your .as file. A: You can also use Ortelius. It's easier to use than ASDoc since it comes with a simple GUI, and it's more forgiving of your code. It's free and open source, but Windows only. ortelius.marten.dk A: Instead of doxygen you should use a documentation generator that specifically supports the language. For ActionScript 2, you have a couple of choices: * *NaturalDocs (example) (free) *ZenDoc (free) *AS2Doc Pro (example) (commercial) If you are using ActionScript 3, Adobe includes a free documentation generator along with their open source compiler (the Flex SDK), called "ASDoc". If you are using FlashDevelop, the latest beta has a built-in GUI for running ASDoc, so you don't have to dirty your hands with the command line.
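For reference, the settings from the first answer map onto a Doxyfile roughly like this (the INPUT path is only a placeholder):

# Doxyfile fragment for ActionScript sources (sketch)
INPUT                = ./src
FILE_PATTERNS        = *.as *.asi
OPTIMIZE_OUTPUT_JAVA = YES
EXTRACT_ALL          = YES
HIDE_UNDOC_MEMBERS   = NO
HIDE_UNDOC_CLASSES   = NO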
{ "language": "en", "url": "https://stackoverflow.com/questions/129405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Why is there a gap between my image and its containing box? When my browser renders the following test case, there's a gap below the image. From my understanding of CSS, the bottom of the blue box should touch the bottom of the red box. But that's not the case. Why? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <title>foo</title> </head> <body> <div style="border: solid blue 2px; padding: 0px;"> <img alt='' style="border: solid red 2px; margin: 0px;" src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png" /> </div> </body> </html> A: Because the image is inline it sits on the baseline. Try vertical-align: bottom; Alternately, in IE sometimes if you have whitespace around an image you get that. So if you remove all the whitespace between the div and img tags, that may resolve it. A: line-height: 0; on the parent DIV fixes this for me. Presumably, this means the default line-height is not 0. A: Inline elements are vertically aligned to the baseline, not the very bottom of the containing box. This is because text needs a small amount of space underneath for descenders - the tails on letters like lowercase 'p'. So there is an imaginary line a short distance above the bottom, called the baseline, and inline elements are vertically aligned with it by default. There's two ways of fixing this problem. You can either specify that the image should be vertically aligned to the bottom, or you can set it to be a block element, in which case it is no longer treated as a part of the text. In addition to this, Internet Explorer has an HTML parsing bug that does not ignore trailing whitespace after a closing element, so removing this whitespace may be necessary if you are having problems with Internet Explorer compatibility. A: display: block in the image fixes it as well, but probably breaks it in other ways ;) A: Remove the line break before the tag, so that it directly follows the tag with no blanks between it. I don't know why, but for the Internet Explorer, this works. A: font-size:0; on the parent DIV is another tricky way to fix it.
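Applied to the test case from the question, the two most common fixes look like this (only the img style changes):

<div style="border: solid blue 2px; padding: 0px;">
  <!-- fix 1: align the image to the bottom instead of the text baseline -->
  <img alt="" style="border: solid red 2px; margin: 0px; vertical-align: bottom;"
       src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png" />
</div>

<!-- fix 2: treat the image as a block element, so baseline alignment no longer applies -->
<!-- <img alt="" style="border: solid red 2px; margin: 0px; display: block;" src="..." /> -->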
{ "language": "en", "url": "https://stackoverflow.com/questions/129406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Passing Exceptions to an error screen in ASP.net/C# Coming from a desktop background I'm not sure exactly how to pass the exceptions I have caught to an Error page in order to avoid the standard exception screen being seen by my users. My general question is how do I pass the exception from page X to my Error page in ASP.net? A: I suggest using the customErrors section in the web.config: <customErrors mode="RemoteOnly" defaultRedirect="/error.html"> <error statusCode="403" redirect="/accessdenied.html" /> <error statusCode="404" redirect="/pagenotfound.html" /> </customErrors> And then using ELMAH to email and/or log the error. A: The pattern I use is to log the error in a try/catch block (using log4net), then do a response.redirect to a simple error page. This assumes you don't need to show any error details. If you need the exception details on a separate page, you might want to look at Server.GetLastError. I use that in global.asax (in the Application_Error event) to log unhandled exceptions and redirect to an error page. A: We've had good luck capturing exceptions in the Global.asax Application_Error event, storing them in session, and redirecting to our error page. Alternately you could encode the error message and pass it to the error page in the querystring. A: You can also get the exception from Server.GetLastError(); A: Use the custom error pages in asp.net, you can find it in the customError section in the web.config A: We capture the exception in the Global.asax file, store it in Session, the user is then redirected to the Error Page where we grab the exception for our Session variable and display the Message information to the user. protected void Application_Error(object sender, EventArgs e) { Exception ex = Server.GetLastError(); this.Session[CacheProvider.ToCacheKey(CacheKeys.LastError)] = ex; } We do log the error message prior to displaying it the user. A: I think you can use the global.asax -- Application_Exception handler to catch the exception and then store it for displaying in an error page. But actually, your error page shouldn't contains code that might cause just another error. It should be simple "Oops! something went wrong" page. If you want details on the error, use Windows' events viewer or ELMAH or employ some logging mechanism.
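For reference, a rough sketch of the Session hand-off several answers describe. The key "LastError", the page name ErrorPage.aspx and the MessageLabel control are placeholders, not from the question:

// Global.asax
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    Session["LastError"] = ex;        // hand the exception to the error page
    Server.ClearError();              // suppress the default error screen
    Response.Redirect("~/ErrorPage.aspx");
}

// ErrorPage.aspx.cs - MessageLabel is an assumed Label control on the page
protected void Page_Load(object sender, EventArgs e)
{
    Exception ex = Session["LastError"] as Exception;
    if (ex != null)
        MessageLabel.Text = Server.HtmlEncode(ex.Message);
}

Note that Session is not available for every kind of error (for example failures outside a request with session state), so in practice a null check or an Application-level store may be needed.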
{ "language": "en", "url": "https://stackoverflow.com/questions/129417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I output progress messages from a SELECT statement? I have a SQL script that I want to output progress messages as it runs. Having it output messages between SQL statements is easy, however I have some very long running INSERT INTO SELECTs. Is there a way to have a select statement output messages as it goes, for example after every 1000 rows, or every 5 seconds? Note: This is for SQL Anywhere, but answers in any SQL dialect will be fine. A: There's no way to retrieve the execution status of a single query. None of the mainstream database engines provide this functionality. Furthermore, a measurable overhead would be generated from any progress implementation were one to exist, so if a query is already taking an uncomfortably long time such that you want to show progress, causing additional slowdown by showing said progress might not be a design goal. You may find this article on estimating SQL execution progress helpful, though its practical implications are limited. A: SQL itself has no provision for this kind of thing. Any way of doing this would involve talking directly to the database engine, and would not be standard across databases. A: Really the idea of progress with set based operations (which is what a relational database uses) wouldn't be too helpful, at least not as displayed with a progress bar (percent done vs total). By the time the optimizer figured out what it needed to do and really understood the full cost of the operation, you have already completed a significant portion of the operation. Progress displays are really meant for iterative operations rather than set operations. That's talking about your general SELECT statement execution. For inserts that are separate statements there are all kinds of ways to do that from the submitter by monitoring the consumption rate of the statements. If they are bulk inserts (select into, insert from, and the like) then you really have the same problem that I described above. Set operations are batched in a way that make a progress bar type of display somewhat meaningless. A: I am on the SQL Anywhere engine development team and there is currently no way to do this. I can't promise anything, but we are considering adding this type of functionality to a future release. A: There's certainly no SQL-standard solution to this. Sorry to be doom-laden, but I haven't seen anything that can do this in Oracle, SQL Server, Sybase or MySQL, so I wouldn't be too hopeful for SQLAnywhere. A: I agree that SQL does not have a way to do this directly. One way might be to only insert the TOP 1000 at a time and then print your status message. Then keep repeating this as needed (in a loop of some kind). The downside is that you would then need a way to keep track of where you are. 
I should note that this approach will not be as efficient as just doing one big INSERT A: Here's what I would do (Sybase / SQL Server syntax): DECLARE @total_rows int SELECT @total_rows = count(*) FROM Source_Table WHILE @total_rows > (SELECT count(*) FROM Target_Table) BEGIN SET rowcount 1000 print 'inserting 1000 rows' INSERT Target_Table SELECT * FROM Source_Table s WHERE NOT EXISTS( SELECT 1 FROM Target_Table t WHERE t.id = s.id ) END set rowcount 0 print 'done' Or you could do it based on IDs (assumes Id is a number): DECLARE @min_id int, @max_id int, @start_id int, @end_id int SELECT @min_id = min(id) , @max_id = max(id) FROM Source_Table SELECT @start_id = @min_id , @end_id = @min_id + 1000 WHILE @end_id <= @max_id BEGIN print 'inserting id range: ' + convert(varchar,@start_id) + ' to ' + convert(varchar,@end_id) INSERT Target_Table SELECT * FROM Source_Table s WHERE id BETWEEN @start_id AND @end_id SELECT @start_id = @end_id + 1, @end_id = @end_id + 1000 END set rowcount 0 print 'done' A: One thought might to have another separate process count the number of rows in the table where the insert is being done to determine what percentage of them are there already. This of course would require that you know the total in the end. This would probably only be okay if this you're not too worried about server load. A: On the off chance you're using Toad, you can generate a set of INSERT statements from a table and configure it to commit at a user input frequency. You could modify your scripts a little bit and then see how much of the new data has been commited as you go. A: You can simulate the effect for your users by timing several runs, then having a progress bar advance at the average records / second rate. The only other ways will be 1 - Refer to the API of your database engine to see if it makes any provision for that or 2 - Break your INSERT into many smaller statements, and report on them as you go. But that will have a significant negative performance impact. A: If you need to have it or you die, for insert,update,delete you can use some trigger logic with db variables, and time by time you do sql to retrieve variable data and display some progress to user. If you wan`t to use it, I can write an example and send it. A: Stumbled upon this old thread looking for something else. I disagree with the idea that we don't want progress information just because it's a set operation. Users will often tolerate even a long wait if they know how long it is. Here's what I suggest: Each time this runs, log the number of rows inserted and the total time, then add a step at the beginning of that process to query that log and calculate an estimated total time. If you base your estimate on the last runs, you should be able to present an acceptably good guess for the wait time for the thing to finish.
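A minimal sketch of that logging idea; the table and column names are invented, and the syntax is SQL Server / Sybase flavoured like the earlier examples:

-- one row per nightly run of the report insert
CREATE TABLE Report_Run_Log (
    run_date      datetime,
    rows_inserted int,
    seconds_taken int
)

-- logged by the nightly job once the big INSERT ... SELECT has finished:
-- INSERT INTO Report_Run_Log VALUES (GETDATE(), <rows>, <seconds>)

-- before (or at the start of) the next run: rough estimate of tonight's duration
SELECT AVG(seconds_taken) AS estimated_seconds FROM Report_Run_Log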
{ "language": "en", "url": "https://stackoverflow.com/questions/129437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is a good project to work on to learn modern patterns and practices? I'm trying to teach myself how to use Modern Persistence Patterns (OR/M, Repository, etc) and development practices (TDD, etc). Because the best way (for me) to learn is by doing, I'd like to build some sort of demo application for myself. The problem is, I've got no idea what sort of application to build. I'd like to blog about my experience, so I'd like to build something of some worth to the community, but at the same time I want to avoid things that others are actively doing (web commerce, forums) or have been done to death (blog engines). Does anybody have any suggestions for a good pet project I could work on and maybe blog about my experiences with? A: I would say an excellent way is to start with the sample project for a core framework you want to learn or build your application around. Using Spring as an example, they have a great 'pet store' web application that you can download that shows how to use many different parts of the framework in the recommended way. From there, you can expand on it: check it into source control, get automated builds going, add your own unit tests or test-first additions, swap in your own ORM layer, try different view layers, etc. Once you have everything working as you want, then you can branch off more easily and even create your own app from the ground up using what you've learned. I find starting with a good base ('good' being important, as you want to learn the best practices and not just base your work on something hacked together by a random internet user) and building out really helps, as opposed to just starting with a blank project, which can be overwhelming especially if you are trying to learn a bunch of new things at once. A: There are innumerable community-service organizations with little or no web presence. Pick a service organization -- any one -- Literacy Volunteers, Food Pantries, Home Furnishings Donations, Alcoholics Anonymous -- anything. The grass-roots community organizations benefit the most from involvement; they often need a more dynamic web presence but can't afford it. Look at their current web site. Build them something better. Donate it to them. A: Of course you could spend six months choosing an open source project and starting little by little to be accepted and to understand how the contribution system works. But the best way is still to start your own project, with your own standards, which will probably be a failure. You need to try, fail, and learn from mistakes to improve, using whatever you want to practice on. As a French writer said: "A seated genius will always go less far than a walking dumbass". A: How about a website where people can ask tech oriented questions, and get responses from the collective expert community on the internets? I think the most important aspect of a pet project is the fact that it HAS to be something that you care about and will use yourself. If you use it, and it is helpful to you, then others will find the same. If you are working on something because someone suggested it, then it becomes like work. To play with TDD, I ended up creating a command line argument parser. I write a lot of console apps, and it was something I could benefit from, was interested in, and that was fun for me. There are already others out there, but that wasn't really the point for me. I too intended to blog about it, but my other pet project for playing with patterns and architecture was a "done to death" blog platform...
and blogging about writing a blogging platform using a blogging platform you are creating... well, that's hard. In the end, neither of my projects brought much to the community at this point, but I've noticed the rewards in how I attack problems. Find something that you can benefit from, and worry about the benefits to others later. Be a little selfish. A: This is a very good question, and I suspect many developers feel the same way. We are often restricted by the applications we develop at work, where there may not be an opportunity to implement every latest and greatest thing. I have similar feelings. What I do is persuade my team to learn new things and share knowledge about new technologies. I have also started building my own small project. It has very little real-world use, but I can play with it. For instance, I am using EntLib in the DAL, but once I manage to learn LINQ I will replace EntLib with it - probably LINQ to Entities. Then I exposed these DAL methods using plain WCF, and later learnt how to implement WebHttpBinding and expose my WCF service using JSON. I now plan to learn ASP.NET MVC and jQuery and do some ASP.NET/AJAX work there. Basically, you should attack the problem one small chunk at a time. If you have the time and zeal, the first solution suggested here seems to be the best. Good luck!!!
{ "language": "en", "url": "https://stackoverflow.com/questions/129438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: postgreSQL - psql \i : how to execute script in a given path I'm new to postgreSQL and I have a simple question: I'm trying to create a simple script that creates a DB so I can later call it like this: psql -f createDB.sql I want the script to call other scripts (separate ones for creating tables, adding constraints, functions etc), like this: \i script1.sql \i script2.sql It works fine provided that createDB.sql is in the same dir. But if I move script2 to a directory under the one with createDB, and modify the createDB so it looks like this: \i script1.sql \i somedir\script2.sql I get an error: psql:createDB.sql:2: somedir: Permission denied I'm using Postgres Plus 8.3 for windows, default postgres user. EDIT: Silly me, unix slashes solved the problem. A: Have you tried using Unix style slashes (/ instead of \)? \ is often an escape or command character, and may be the source of confusion. I have never had issues with this, but I also do not have Windows, so I cannot test it. Additionally, the permissions may be based on the user running psql, or maybe the user executing the postmaster service, check that both have read to that file in that directory. A: Try this, I work myself to do so \i 'somedir\\script2.sql' A: Postgres started on Linux/Unix. I suspect that reversing the slash with fix it. \i somedir/script2.sql If you need to fully qualify something \i c:/somedir/script2.sql If that doesn't fix it, my next guess would be you need to escape the backslash. \i somedir\\script2.sql A: i did try this and its working in windows machine to run a sql file on a specific schema. psql -h localhost -p 5432 -U username -d databasename -v schema=schemaname < e:\Table.sql
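For completeness, the corrected createDB.sql from the question simply uses forward slashes, which psql accepts on Windows as well:

-- createDB.sql, run with: psql -f createDB.sql
\i script1.sql
\i somedir/script2.sql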
{ "language": "en", "url": "https://stackoverflow.com/questions/129445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: JavaScript or Java String Subtraction If you are using Java or JavaScript, is there a good way to do something like a String subtraction so that given two strings: org.company.project.component org.company.project.component.sub_component you just get: sub_component I know that I could just write code to walk the string comparing characters, but I was hoping there was a way to do it in a really compact way. Another use-case is to find the diff between the strings: org.company.project.component.diff org.company.project.component.sub_component I actually only want to remove the sections that are identical. A: Depends on precisely what you want. If you're looking for a way to compare strings in the general case -- meaning finding common sub-strings between arbitrary inputs -- then you're looking at something closer to the Levenshtein distance and similar algorithms. However, if all you need is prefix/suffix comparison, this should work: public static String sub(String a, String b) { if (b.startsWith(a)) { return b.substring(a.length()); } if (b.endsWith(a)) { return b.substring(0, b.length() - a.length()); } return ""; } ...or something roughly to that effect. A: String result = "org.company.project.component.sub_component".replace("org.company.project.component","") Should work... EDIT: Apache commons libraries are also great to use. As noted below, the StringUtils class does in fact have a method for this: StringUtils.remove() A: At first glance, I thought of RegExp, but adding to the question, you removed that possibility by adding to the start-string ... So you'll have to make a procedure that removes every character that is equal from the resulting string, something like this: <script type="text/javascript"> var a = "org.company.project.component.diff"; var b = "org.company.project.component.sub_component"; var i = 0; while(a.charAt(i) == b.charAt(i)){ i++; } alert(b.substring(i)); </script> By the way, it doesn't make sense to treat Java and JavaScript as equals in any context; a popular way of putting it could be: Java and JavaScript have four things in common: j - a - v - a !-) A: Can't you just replace the occurrences of the first string in the second with an empty string? A: If you're just trying to get whatever's after the last dot, I find this method easy in JavaScript: var baseString = "org.company.project.component.sub_component"; var splitString = baseString.split("."); var subString = splitString[splitString.length - 1]; subString will contain the value you're looking for. A: function sub(a, b) { return [a, b].join('\x01').match(/^([^\x01]*)[^\x01]*\x01\1(.*)/)[2]; } Though this relies on the character with code 1 not appearing in either of those strings. A: This is a solution for the JavaScript end of the question: String.prototype.contracat = function(string){ var thing = this.valueOf(); for(var i=0; i<string.length;i++){ thing=thing.replace(string.charAt(i),""); } return thing };
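A small self-contained Java check of the prefix/suffix helper from the first answer. Note that the prefix passed in includes the trailing dot, so the output matches the question's expected result exactly:

public class SubDemo {
    static String sub(String a, String b) {
        if (b.startsWith(a)) return b.substring(a.length());
        if (b.endsWith(a)) return b.substring(0, b.length() - a.length());
        return "";
    }
    public static void main(String[] args) {
        // prints "sub_component"
        System.out.println(sub("org.company.project.component.",
                               "org.company.project.component.sub_component"));
    }
}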
{ "language": "en", "url": "https://stackoverflow.com/questions/129451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .NET EventHandlers - Generic or no? Every time I start in deep in a C# project, I end up with lots of events that really just need to pass a single item. I stick with the EventHandler/EventArgs practice, but what I like to do is have something like: public delegate void EventHandler<T>(object src, EventArgs<T> args); public class EventArgs<T>: EventArgs { private T item; public EventArgs(T item) { this.item = item; } public T Item { get { return item; } } } Later, I can have my public event EventHandler<Foo> FooChanged; public event EventHandler<Bar> BarChanged; However, it seems that the standard for .NET is to create a new delegate and EventArgs subclass for each type of event. Is there something wrong with my generic approach? EDIT: The reason for this post is that I just re-created this in a new project, and wanted to make sure it was ok. Actually, I was re-creating it as I posted. I found that there is a generic EventHandler<TEventArgs>, so you don't need to create the generic delegate, but you still need the generic EventArgs<T> class, because TEventArgs: EventArgs. Another EDIT: One downside (to me) of the built-in solution is the extra verbosity: public event EventHandler<EventArgs<Foo>> FooChanged; vs. public event EventHandler<Foo> FooChanged; It can be a pain for clients to register for your events though, because the System namespace is imported by default, so they have to manually seek out your namespace, even with a fancy tool like Resharper... Anyone have any ideas pertaining to that? A: To make generic event declaration easier, I created a couple of code snippets for it. To use them: * *Copy the whole snippet. *Paste it in a text file (e.g. in Notepad). *Save the file with a .snippet extension. *Put the .snippet file in your appropriate snippet directory, such as: Visual Studio 2008\Code Snippets\Visual C#\My Code Snippets Here's one that uses a custom EventArgs class with one property: <?xml version="1.0" encoding="utf-8" ?> <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet"> <CodeSnippet Format="1.0.0"> <Header> <Title>Generic event with one type/argument.</Title> <Shortcut>ev1Generic</Shortcut> <Description>Code snippet for event handler and On method</Description> <Author>Ryan Lundy</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Declarations> <Literal> <ID>type</ID> <ToolTip>Type of the property in the EventArgs subclass.</ToolTip> <Default>propertyType</Default> </Literal> <Literal> <ID>argName</ID> <ToolTip>Name of the argument in the EventArgs subclass constructor.</ToolTip> <Default>propertyName</Default> </Literal> <Literal> <ID>propertyName</ID> <ToolTip>Name of the property in the EventArgs subclass.</ToolTip> <Default>PropertyName</Default> </Literal> <Literal> <ID>eventName</ID> <ToolTip>Name of the event</ToolTip> <Default>NameOfEvent</Default> </Literal> </Declarations> <Code Language="CSharp"><![CDATA[public class $eventName$EventArgs : System.EventArgs { public $eventName$EventArgs($type$ $argName$) { this.$propertyName$ = $argName$; } public $type$ $propertyName$ { get; private set; } } public event EventHandler<$eventName$EventArgs> $eventName$; protected virtual void On$eventName$($eventName$EventArgs e) { var handler = $eventName$; if (handler != null) handler(this, e); }]]> </Code> <Imports> <Import> <Namespace>System</Namespace> </Import> </Imports> </Snippet> </CodeSnippet> </CodeSnippets> And here's one that has two properties: <?xml version="1.0" encoding="utf-8" ?> 
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet"> <CodeSnippet Format="1.0.0"> <Header> <Title>Generic event with two types/arguments.</Title> <Shortcut>ev2Generic</Shortcut> <Description>Code snippet for event handler and On method</Description> <Author>Ryan Lundy</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Declarations> <Literal> <ID>type1</ID> <ToolTip>Type of the first property in the EventArgs subclass.</ToolTip> <Default>propertyType1</Default> </Literal> <Literal> <ID>arg1Name</ID> <ToolTip>Name of the first argument in the EventArgs subclass constructor.</ToolTip> <Default>property1Name</Default> </Literal> <Literal> <ID>property1Name</ID> <ToolTip>Name of the first property in the EventArgs subclass.</ToolTip> <Default>Property1Name</Default> </Literal> <Literal> <ID>type2</ID> <ToolTip>Type of the second property in the EventArgs subclass.</ToolTip> <Default>propertyType1</Default> </Literal> <Literal> <ID>arg2Name</ID> <ToolTip>Name of the second argument in the EventArgs subclass constructor.</ToolTip> <Default>property1Name</Default> </Literal> <Literal> <ID>property2Name</ID> <ToolTip>Name of the second property in the EventArgs subclass.</ToolTip> <Default>Property2Name</Default> </Literal> <Literal> <ID>eventName</ID> <ToolTip>Name of the event</ToolTip> <Default>NameOfEvent</Default> </Literal> </Declarations> <Code Language="CSharp"> <![CDATA[public class $eventName$EventArgs : System.EventArgs { public $eventName$EventArgs($type1$ $arg1Name$, $type2$ $arg2Name$) { this.$property1Name$ = $arg1Name$; this.$property2Name$ = $arg2Name$; } public $type1$ $property1Name$ { get; private set; } public $type2$ $property2Name$ { get; private set; } } public event EventHandler<$eventName$EventArgs> $eventName$; protected virtual void On$eventName$($eventName$EventArgs e) { var handler = $eventName$; if (handler != null) handler(this, e); }]]> </Code> <Imports> <Import> <Namespace>System</Namespace> </Import> </Imports> </Snippet> </CodeSnippet> </CodeSnippets> You can follow the pattern to create them with as many properties as you like. A: No, I don't think this is the wrong approach. I think it's even recommended in the [fantastic] book Framework Design Guidelines. I do the same thing. A: This is the correct implementation. It has been added to the .NET Framework (mscorlib) since generics first came available (2.0). For more on its usage and implementation see MSDN: http://msdn.microsoft.com/en-us/library/db0etb8x.aspx A: The first time I saw this little pattern, I was using Composite UI Application block, from MS Patterns & Practices group. It doesn't throw any red flag to me ; in fact it is even a smart way of leveraging generics to follow the DRY rule. A: Delegate of the following form has been added since .NET Framework 2.0 public delegate void EventHandler<TArgs>(object sender, TArgs args) where TArgs : EventArgs You approach goes a bit further, since you provide out-of-the-box implementation for EventArgs with single data item, but it lacks several properties of the original idea: * *You cannot add more properties to the event data without changing dependent code. You will have to change the delegate signature to provide more data to the event subscriber. *Your data object is generic, but it is also "anonymous", and while reading the code you will have to decipher the "Item" property from usages. It should be named according to the data it provides. 
*Using generics this way you can't make parallel hierarchy of EventArgs, when you have hierarchy of underlying (item) types. E.g. EventArgs<BaseType> is not base type for EventArgs<DerivedType>, even if BaseType is base for DerivedType. So, I think it is better to use generic EventHandler<T>, but still have custom EventArgs classes, organized according to the requirements of the data model. With Visual Studio and extensions like ReSharper, it is only a matter of few commands to create new class like that. A: Since .NET 2.0 EventHandler<T> has been implemented. A: You can find Generic EventHandler on MSDN http://msdn.microsoft.com/en-us/library/db0etb8x.aspx I have been using generic EventHandler extensively and was able to prevent so-called "Explosion of Types(Classes)" Project was kept smaller and easier to navigate around. Coming up with a new intuitive a delegate for non-generic EventHandler delegate is painful and overlap with existing types Appending "*EventHandler" to new delegate name does not help much in my opinion A: I do believe that the recent versions of .NET have just such an event handler defined in them. That's a big thumbs up as far as I'm concerned. /EDIT Didn't get the distinction there originally. As long as you are passing back a class that inherits from EventArgs, which you are, I don't see a problem. I would be concerned if you weren't wrapping the resultfor maintainability reasons. I still say it looks good to me. A: Use generic event handler instances Before .NET Framework 2.0, in order to pass custom information to the event handler, a new delegate had to be declared that specified a class derived from the System.EventArgs class. This is no longer true in .NET Framework 2.0, which introduced the System.EventHandler<T>) delegate. This generic delegate allows any class derived from EventArgs to be used with the event handler.
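As a small illustration of that built-in EventHandler<TEventArgs> delegate; the Widget and FooChangedEventArgs names are invented for the example:

// a custom EventArgs class organized around the data it carries
public class FooChangedEventArgs : EventArgs
{
    public FooChangedEventArgs(string newValue) { this.newValue = newValue; }
    private readonly string newValue;
    public string NewValue { get { return newValue; } }
}

public class Widget
{
    // no custom delegate type needed
    public event EventHandler<FooChangedEventArgs> FooChanged;

    protected virtual void OnFooChanged(FooChangedEventArgs e)
    {
        EventHandler<FooChangedEventArgs> handler = FooChanged;
        if (handler != null) handler(this, e);
    }

    public void ChangeFoo(string value) { OnFooChanged(new FooChangedEventArgs(value)); }
}

// subscribing:
// widget.FooChanged += (sender, e) => Console.WriteLine(e.NewValue);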
{ "language": "en", "url": "https://stackoverflow.com/questions/129453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Performance gains in stored procs for long running transactions I have several long running report type transactions that take 5-10 minutes. Would I see any performance increase by using stored procs? Would it be significant? each query runs once a night. A: Probably not. Stored procs give you the advantage of pre-compiled SQL. If your SQL is invoked infrequently, they this advantage will be pretty worthless. So if you have SQL that is expensive because the queries themselves are expensive, then stored procs will gain you no meaningful performance advantage. If you have queries that are invoked very frequently and which themselves execute quickly, then it's worth having a proc. A: Most likely not. The performance gains from stored procs, if any (depends on your use case) are the kind that are un-noticable in the micro -- only in the macro. Reporting-type queries are ones that aggregate LOTS of data and if that's the case it'll be slow no matter how the execution method. Only indexing and/or other physical data changes can make it faster. See: Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's? A: The short answer is: no, stored procedures aren't going to improve the performance. For a start, if you are using parameterised queries there is no difference in performance between a stored procedure and inline SQL. The reason is that ALL queries have cached execution plans - not just stored procedures. Have a look at http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx If you aren't parameterising your inline queries and you're just building the query up and inserting the 'parameters' as literals then each query will look different to the database and it will need to pre-compile each one. So in this case, you would be doing yourself a favour by using parameters in your inline SQL. And you should do this anyway from a security perspective, otherwise you are opening yourself up to SQL injection attacks. But anyway the pre-compilation issue is a red herring here. You are talking about long running queries - so long that the pre-compliation is going to be insignificant. So unfortunately, you aren't going to get off easily here. Your solution is going to be to optimise the actual design of your queries, or even to rethink the whole way you are aproaching the task. A: yes, the query plan for stored procs can be optimized and even if it can't procs are preferred over embedded sql "would you see any performance improvement" - the only way to know for certain is to try it in theory, stored procedures pre-parse the sql and store the query plan instead of figuring out each time, so there should be some speedup just from that, however, i doubt it would be significant in a 5-10 minute process if the speed is of concern your best bet is to look at the query plan and see if it can be improved with different query structures and/or adding indices et al if the speed is not of concern, stored procs provide better encapsulation than inline sql A: As others have said, you won't see much performance gain from the stored procedure being pre-compiled. However, if your current transactions have multiple statements, with data going back and forth between the server, then wrapping it in a stored procedure could eliminate some of that back-and-forth, which can be a real performance killer. Look into proper indexing, but also consider the fact that the queries themselves (or the whole process if it consists of multiple steps) might be inefficient. 
Without seeing your actual code it's hard to say.
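As an aside on the parameterisation point raised earlier: an inline query written with parameters gets a cached plan just like a proc and avoids SQL injection. A rough C# sketch, with invented table, column and variable names (assumes using System.Data; using System.Data.SqlClient; and that connectionString, fromDate and toDate already exist):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT SUM(Amount) FROM ReportFacts WHERE ReportDate >= @from AND ReportDate < @to", conn))
{
    // parameters keep the query text identical between runs, so the plan is reused
    cmd.Parameters.Add("@from", SqlDbType.DateTime).Value = fromDate;
    cmd.Parameters.Add("@to", SqlDbType.DateTime).Value = toDate;
    conn.Open();
    object total = cmd.ExecuteScalar();
}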
{ "language": "en", "url": "https://stackoverflow.com/questions/129494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use a stored procedure in log4net ADONetAppender? I am using the ADONetAppender to (try) to log data via a stored procedure (so that I may inject logic into the logging routine). My configuration settings are listed below. Can anybody tell what I'm doing wrong? <appender name="ADONetAppender_SqlServer" type="log4net.Appender.ADONetAppender"> <bufferSize value="1" /> <threshold value="ALL"/> <param name="ConnectionType" value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <param name="ConnectionString" value="<MyConnectionString>" /> <param name="UseTransactions" value="False" /> <commandText value="dbo.LogDetail_via_Log4Net" /> <commandType value="StoredProcedure" /> <parameter> <parameterName value="@AppLogID"/> <dbType value="String"/> <size value="50" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%property{LoggingSessionId}" /> </layout> </parameter> <parameter> <parameterName value="@CreateUser"/> <dbType value="String"/> <size value="50" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%property{HttpUser}" /> </layout> </parameter> <parameter> <parameterName value="@Message"/> <dbType value="String"/> <size value="8000" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%message" /> </layout> </parameter> <parameter> <parameterName value="@LogLevel"/> <dbType value="String"/> <size value="50"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%level" /> </layout> </parameter> </appender> A: Use "AnsiString" as dbType for varchar. "String" for nvarchar. http://msdn.microsoft.com/en-us/library/system.data.dbtype%28v=VS.90%29.aspx A: Thanks to a vigilant DBA, we have solved the problem. Note the size of the "@Message" parameter. log4net is taking a guess at how to convert the type and (I think) converting it to nvarchar even though the column is a varchar. This is a big deal because nvarchar has a max size of 4000 while varchar has a max size of 8000. The DBA saw errors as described in this KB article: http://support.microsoft.com/kb/827366 I changed the size to 4000 and everything works swimingly. Hopefully this will help somebody else avoid the same problem. Cheers! 
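One related detail: the @AppLogID and @CreateUser parameters in the configuration above are populated from %property{LoggingSessionId} and %property{HttpUser}, so the application has to set those properties before logging. A rough sketch of one way to do that (ThreadContext is the per-thread property bag in log4net 1.2.10 and later; older versions used MDC instead, and where exactly you set the values depends on the application):

// e.g. early in the request, before anything is logged
// 'log' is an ILog obtained via LogManager.GetLogger(...), sessionId is assumed to exist
log4net.ThreadContext.Properties["LoggingSessionId"] = sessionId;
log4net.ThreadContext.Properties["HttpUser"] = HttpContext.Current.User.Identity.Name;

log.Info("Something worth recording");   // both %property values are now available to the appender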
A: </configSections> <log4net> <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender"> <bufferSize value="1"/> <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089"/> <connectionString value="Data Source=yourservername;initial Catalog=Databasename;User ID=sa;Password=xyz;"/> <commandText value="INSERT INTO Log4Net ([Date], [Thread], [Level], [Logger], [Message], [Exception]) VALUES (@log_date, @thread, @log_level, @logger, @message, @exception)"/> <parameter> <parameterName value="@log_date"/> <dbType value="DateTime"/> <layout type="log4net.Layout.RawTimeStampLayout"/> </parameter> <parameter> <parameterName value="@thread"/> <dbType value="String"/> <size value="255"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%thread ip=%property{ip}"/> </layout> </parameter> <parameter> <parameterName value="@log_level"/> <dbType value="String"/> <size value="50"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%level"/> </layout> </parameter> <parameter> <parameterName value="@logger"/> <dbType value="String"/> <size value="255"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%logger"/> </layout> </parameter> <parameter> <parameterName value="@message"/> <dbType value="String"/> <size value="4000"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%message"/> </layout>
{ "language": "en", "url": "https://stackoverflow.com/questions/129498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I wrap text in a UITableViewCell without a custom cell This is on iPhone 0S 2.0. Answers for 2.1 are fine too, though I am unaware of any differences regarding tables. It feels like it should be possible to get text to wrap without creating a custom cell, since a UITableViewCell contains a UILabel by default. I know I can make it work if I create a custom cell, but that's not what I'm trying to achieve - I want to understand why my current approach doesn't work. I've figured out that the label is created on demand (since the cell supports text and image access, so it doesn't create the data view until necessary), so if I do something like this: cell.text = @""; // create the label UILabel* label = (UILabel*)[[cell.contentView subviews] objectAtIndex:0]; then I get a valid label, but setting numberOfLines on that (and lineBreakMode) doesn't work - I still get single line text. There is plenty of height in the UILabel for the text to display - I'm just returning a large value for the height in heightForRowAtIndexPath. A: A brief comment / answer to record my experience when I had the same problem. Despite using the code examples, the table view cell height was adjusting, but the label inside the cell was still not adjusting correctly - solution was that I was loading my cell from a custom NIB file, which happens after the cell height in adjusted. And I had my settings inside the NIB file to not wrap text, and only have 1 line for the label; the NIB file settings were overriding the settings I adjusted inside the code. The lesson I took was to make sure to always bear in mind what the state of the objects are at each point in time - they might not have been created yet! ... hth someone down the line. A: If we are to add only text in UITableView cell, we need only two delegates to work with (no need to add extra UILabels) 1) cellForRowAtIndexPath 2) heightForRowAtIndexPath This solution worked for me:- -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath*)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } cell.textLabel.font = [UIFont fontWithName:@"Helvetica" size:16]; cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; cell.textLabel.numberOfLines = 0; [cell setSelectionStyle:UITableViewCellSelectionStyleGray]; cell.textLabel.text = [mutArr objectAtIndex:indexPath.section]; NSLog(@"%@",cell.textLabel.text); cell.accessoryView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"arrow.png" ]]; return cell; } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { CGSize labelSize = CGSizeMake(200.0, 20.0); NSString *strTemp = [mutArr objectAtIndex:indexPath.section]; if ([strTemp length] > 0) labelSize = [strTemp sizeWithFont: [UIFont boldSystemFontOfSize: 14.0] constrainedToSize: CGSizeMake(labelSize.width, 1000) lineBreakMode: UILineBreakModeWordWrap]; return (labelSize.height + 10); } Here the string mutArr is a mutable array from which i am getting my data. EDIT :- Here is the array which I took. 
mutArr= [[NSMutableArray alloc] init]; [mutArr addObject:@"HEMAN"]; [mutArr addObject:@"SUPERMAN"]; [mutArr addObject:@"Is SUPERMAN powerful than HEMAN"]; [mutArr addObject:@"Well, if HEMAN is weaker than SUPERMAN, both are friends and we will never get to know who is more powerful than whom because they will never have a fight among them"]; [mutArr addObject:@"Where are BATMAN and SPIDERMAN"]; A: Now the tableviews can have self-sizing cells. Set the table view up as follows tableView.estimatedRowHeight = 85.0 //use an appropriate estimate tableView.rowHeight = UITableViewAutomaticDimension Apple Reference A: Here is a simpler way, and it works for me: Inside your cellForRowAtIndexPath: function. The first time you create your cell: UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; cell.textLabel.numberOfLines = 0; cell.textLabel.font = [UIFont fontWithName:@"Helvetica" size:17.0]; } You'll notice that I set the number of lines for the label to 0. This lets it use as many lines as it needs. The next part is to specify how large your UITableViewCell will be, so do that in your heightForRowAtIndexPath function: - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *cellText = @"Go get some text for your cell."; UIFont *cellFont = [UIFont fontWithName:@"Helvetica" size:17.0]; CGSize constraintSize = CGSizeMake(280.0f, MAXFLOAT); CGSize labelSize = [cellText sizeWithFont:cellFont constrainedToSize:constraintSize lineBreakMode:UILineBreakModeWordWrap]; return labelSize.height + 20; } I added 20 to my returned cell height because I like a little buffer around my text. A: Updated Tim Rupe's answer for iOS7: UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] ; cell.textLabel.lineBreakMode = NSLineBreakByWordWrapping; cell.textLabel.numberOfLines = 0; cell.textLabel.font = [UIFont fontWithName:@"Helvetica" size:17.0]; } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *cellText = @"Go get some text for your cell."; UIFont *cellFont = [UIFont fontWithName:@"Helvetica" size:17.0]; NSAttributedString *attributedText = [[NSAttributedString alloc] initWithString:cellText attributes:@ { NSFontAttributeName: cellFont }]; CGRect rect = [attributedText boundingRectWithSize:CGSizeMake(tableView.bounds.size.width, CGFLOAT_MAX) options:NSStringDrawingUsesLineFragmentOrigin context:nil]; return rect.size.height + 20; } A: I use the following solutions. The data is provided separately in a member: -(NSString *)getHeaderData:(int)theSection { ... return rowText; } The handling can be easily done in cellForRowAtIndexPath. Define the cell / define the font and assign these values to the result "cell". Note that the numberoflines is set to "0", which means take what is needed. 
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; UIFont *cellFont = [UIFont fontWithName:@"Verdana" size:12.0]; cell.textLabel.text= [self getRowData:indexPath.section]; cell.textLabel.font = cellFont; cell.textLabel.numberOfLines=0; return cell; } In heightForRowAtIndexPath, I calculate the heights of the wrapped text. The boding size shall be related to the width of your cell. For iPad this shall be 1024. For iPhone en iPod 320. - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { UIFont *cellFont = [UIFont fontWithName:@"Verdana" size:12.0]; CGSize boundingSize = CGSizeMake(1024, CGFLOAT_MAX); CGSize requiredSize = [[self getRowData:indexPath.section] sizeWithFont:cellFont constrainedToSize:boundingSize lineBreakMode:UILineBreakModeWordWrap]; return requiredSize.height; } A: I found this to be quite simple and straightForward : [self.tableView setRowHeight:whatEvereight.0f]; for e.g. : [self.tableView setRowHeight:80.0f]; This may or may not be the best / standard approach to do so, but it worked in my case. A: Try my code in swift . This code will work for normal UILabels also. extension UILabel { func lblFunction() { //You can pass here all UILabel properties like Font, colour etc.... numberOfLines = 0 lineBreakMode = .byWordWrapping//If you want word wraping lineBreakMode = .byCharWrapping//If you want character wraping } } Now call simply like this cell.textLabel.lblFunction()//Replace your label name A: I think this is a better and shorter solution. Just format the UILabel (textLabel) of the cell to auto calculate for the height by specifying sizeToFit and everything should be fine. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell... cell.textLabel.text = @"Whatever text you want to put here is ok"; cell.textLabel.lineBreakMode = UILineBreakModeWordWrap; cell.textLabel.numberOfLines = 0; [cell.textLabel sizeToFit]; return cell; } A: I don't think you can manipulate a base UITableViewCell's private UILabel to do this. You could add a new UILabel to the cell yourself and use numberOfLines with sizeToFit to size it appropriately. Something like: UILabel* label = [[UILabel alloc] initWithFrame:cell.frame]; label.numberOfLines = <...an appriate number of lines...> label.text = <...your text...> [label sizeToFit]; [cell addSubview:label]; [label release];
{ "language": "en", "url": "https://stackoverflow.com/questions/129502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "151" }
Q: How do you test that a Python function throws an exception? How does one write a unit test that fails only if a function doesn't throw an expected exception? A: Use TestCase.assertRaises (or TestCase.failUnlessRaises) from the unittest module, for example: import mymod class MyTestCase(unittest.TestCase): def test1(self): self.assertRaises(SomeCoolException, mymod.myfunc) A: There are a lot of answers here. The code shows how we can create an Exception, how we can use that exception in our methods, and finally, how you can verify in a unit test, the correct exceptions being raised. import unittest class DeviceException(Exception): def __init__(self, msg, code): self.msg = msg self.code = code def __str__(self): return repr("Error {}: {}".format(self.code, self.msg)) class MyDevice(object): def __init__(self): self.name = 'DefaultName' def setParameter(self, param, value): if isinstance(value, str): setattr(self, param , value) else: raise DeviceException('Incorrect type of argument passed. Name expects a string', 100001) def getParameter(self, param): return getattr(self, param) class TestMyDevice(unittest.TestCase): def setUp(self): self.dev1 = MyDevice() def tearDown(self): del self.dev1 def test_name(self): """ Test for valid input for name parameter """ self.dev1.setParameter('name', 'MyDevice') name = self.dev1.getParameter('name') self.assertEqual(name, 'MyDevice') def test_invalid_name(self): """ Test to check if error is raised if invalid type of input is provided """ self.assertRaises(DeviceException, self.dev1.setParameter, 'name', 1234) def test_exception_message(self): """ Test to check if correct exception message and code is raised when incorrect value is passed """ with self.assertRaises(DeviceException) as cm: self.dev1.setParameter('name', 1234) self.assertEqual(cm.exception.msg, 'Incorrect type of argument passed. Name expects a string', 'mismatch in expected error message') self.assertEqual(cm.exception.code, 100001, 'mismatch in expected error code') if __name__ == '__main__': unittest.main() A: Your code should follow this pattern (this is a unittest module style test): def test_afunction_throws_exception(self): try: afunction() except ExpectedException: pass except Exception: self.fail('unexpected exception raised') else: self.fail('ExpectedException not raised') On Python < 2.7 this construct is useful for checking for specific values in the expected exception. The unittest function assertRaises only checks if an exception was raised. 
A: Since Python 2.7 you can use context manager to get ahold of the actual Exception object thrown: import unittest def broken_function(): raise Exception('This is broken') class MyTestCase(unittest.TestCase): def test(self): with self.assertRaises(Exception) as context: broken_function() self.assertTrue('This is broken' in context.exception) if __name__ == '__main__': unittest.main() assertRaises In Python 3.5, you have to wrap context.exception in str, otherwise you'll get a TypeError self.assertTrue('This is broken' in str(context.exception)) A: I just discovered that the Mock library provides an assertRaisesWithMessage() method (in its unittest.TestCase subclass), which will check not only that the expected exception is raised, but also that it is raised with the expected message: from testcase import TestCase import mymod class MyTestCase(TestCase): def test1(self): self.assertRaisesWithMessage(SomeCoolException, 'expected message', mymod.myfunc) A: You can use assertRaises from the unittest module: import unittest class TestClass(): def raises_exception(self): raise Exception("test") class MyTestCase(unittest.TestCase): def test_if_method_raises_correct_exception(self): test_class = TestClass() # Note that you don’t use () when passing the method to assertRaises self.assertRaises(Exception, test_class.raises_exception) A: There are 4 options (you'll find full example in the end): assertRaises with context manager def test_raises(self): with self.assertRaises(RuntimeError): raise RuntimeError() If you want to check the exception message (see the "assertRaisesRegex with context manager" option below to check only part of it): def test_raises(self): with self.assertRaises(RuntimeError) as error: raise RuntimeError("your exception message") self.assertEqual(str(error.exception), "your exception message") assertRaises one-liner Pay attention: instead of function call, here you use your function as callable (without round brackets). def test_raises(self): self.assertRaises(RuntimeError, your_function) assertRaisesRegex with context manager Second parameter is regex expression and is mandatory. Handy when you want check only part of the exception message. def test_raises_regex(self): with self.assertRaisesRegex(RuntimeError, r'.* exception message'): raise RuntimeError('your exception message') assertRaisesRegex one-liner Second parameter is regex expression and is mandatory. Handy when you want check only part of the exception message. Pay attention: instead of function call, here you use your function as callable (without round brackets). 
def test_raises_regex(self): self.assertRaisesRegex(RuntimeError, r'.* exception message', your_function) Full code example: import unittest def your_function(): raise RuntimeError('your exception message') class YourTestCase(unittest.TestCase): def test_1_raises_context_manager(self): with self.assertRaises(RuntimeError): your_function() def test_1b_raises_context_manager_and_error_message(self): with self.assertRaises(RuntimeError) as error: your_function() self.assertEqual(str(error.exception), "your exception message") def test_2_raises_oneliner(self): self.assertRaises(RuntimeError, your_function) def test_3_raises_regex_context_manager(self): with self.assertRaisesRegex(RuntimeError, r'.* exception message'): your_function() def test_4_raises_regex_oneliner(self): self.assertRaisesRegex(RuntimeError, r'.* exception message', your_function) if __name__ == '__main__': unittest.main() Although it's up to developer which style to follow I prefer both methods using context manager. A: The code in my previous answer can be simplified to: def test_afunction_throws_exception(self): self.assertRaises(ExpectedException, afunction) And if a function takes arguments, just pass them into assertRaises like this: def test_afunction_throws_exception(self): self.assertRaises(ExpectedException, afunction, arg1, arg2) A: From http://www.lengrand.fr/2011/12/pythonunittest-assertraises-raises-error/: First, here is the corresponding (still dum :p) function in file dum_function.py: def square_value(a): """ Returns the square value of a. """ try: out = a*a except TypeError: raise TypeError("Input should be a string:") return out Here is the test to be performed (only this test is inserted): import dum_function as df # Import function module import unittest class Test(unittest.TestCase): """ The class inherits from unittest """ def setUp(self): """ This method is called before each test """ self.false_int = "A" def tearDown(self): """ This method is called after each test """ pass #--- ## TESTS def test_square_value(self): # assertRaises(excClass, callableObj) prototype self.assertRaises(TypeError, df.square_value(self.false_int)) if __name__ == "__main__": unittest.main() We are now ready to test our function! Here is what happens when trying to run the test: ====================================================================== ERROR: test_square_value (__main__.Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_dum_function.py", line 22, in test_square_value self.assertRaises(TypeError, df.square_value(self.false_int)) File "/home/jlengrand/Desktop/function.py", line 8, in square_value raise TypeError("Input should be a string:") TypeError: Input should be a string: ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) The TypeError is actually raised, and generates a test failure. The problem is that this is exactly the behavior we wanted :s. To avoid this error, simply run the function using lambda in the test call: self.assertRaises(TypeError, lambda: df.square_value(self.false_int)) The final output: ---------------------------------------------------------------------- Ran 1 test in 0.000s OK Perfect! ... and for me is perfect too!! Thanks a lot, Mr. Julien Lengrand-Lambert. This test assert actually returns a false positive. That happens because the lambda inside the 'assertRaises' is the unit that raises type error and not the tested function. 
A: As I haven't seen any detailed explanation on how to check if we got a specific exception among a list of accepted one using context manager, or other exception details I will add mine (checked on Python 3.8). If I just want to check that function is raising for instance TypeError, I would write: with self.assertRaises(TypeError): function_raising_some_exception(parameters) If I want to check that function is raising either TypeError or IndexError, I would write: with self.assertRaises((TypeError,IndexError)): function_raising_some_exception(parameters) And if I want even more details about the Exception raised I could catch it in a context like this: # Here I catch any exception with self.assertRaises(Exception) as e: function_raising_some_exception(parameters) # Here I check actual exception type (but I could # check anything else about that specific exception, # like it's actual message or values stored in the exception) self.assertTrue(type(e.exception) in [TypeError,MatrixIsSingular]) A: For those on Django, you can use context manager to run the faulty function and assert it raises the exception with a certain message using assertRaisesMessage with self.assertRaisesMessage(SomeException,'Some error message e.g 404 Not Found'): faulty_funtion() A: If you are using pytest you can use pytest.raises(Exception): Example: def test_div_zero(): with pytest.raises(ZeroDivisionError): 1/0 And the result: $ py.test ================= test session starts ================= platform linux2 -- Python 2.6.6 -- py-1.4.20 -- pytest-2.5.2 -- /usr/bin/python collected 1 items tests/test_div_zero.py:6: test_div_zero PASSED Or you can build your own contextmanager to check if the exception was raised. import contextlib @contextlib.contextmanager def raises(exception): try: yield except exception as e: assert True else: assert False And then you can use raises like this: with raises(Exception): print "Hola" # Calls assert False with raises(Exception): raise Exception # Calls assert True A: For await/async aiounittest there is a slightly different pattern: https://aiounittest.readthedocs.io/en/latest/asynctestcase.html#aiounittest.AsyncTestCase async def test_await_async_fail(self): with self.assertRaises(Exception) as e: await async_one() A: This will raise TypeError if setting stock_id to an Integer in this class will throw the error, the test will pass if this happens and fails otherwise def set_string(prop, value): if not isinstance(value, str): raise TypeError("i told you i take strings only ") return value class BuyVolume(ndb.Model): stock_id = ndb.StringProperty(validator=set_string) from pytest import raises buy_volume_instance: BuyVolume = BuyVolume() with raises(TypeError): buy_volume_instance.stock_id = 25 A: If you are using Python 3, in order to assert an exception along with its message, you can use assertRaises in context manager and pass the message as a msg keyword argument like so: import unittest def your_function(): raise RuntimeError('your exception message') class YourTestCase(unittest.TestCase): def test(self): with self.assertRaises(RuntimeError, msg='your exception message'): your_function() if __name__ == '__main__': unittest.main() A: How do you test that a Python function throws an exception? How does one write a test that fails only if a function doesn't throw an expected exception? 
Short Answer: Use the self.assertRaises method as a context manager: def test_1_cannot_add_int_and_str(self): with self.assertRaises(TypeError): 1 + '1' Demonstration The best practice approach is fairly easy to demonstrate in a Python shell. The unittest library In Python 2.7 or 3: import unittest In Python 2.6, you can install a backport of 2.7's unittest library, called unittest2, and just alias that as unittest: import unittest2 as unittest Example tests Now, paste into your Python shell the following test of Python's type-safety: class MyTestCase(unittest.TestCase): def test_1_cannot_add_int_and_str(self): with self.assertRaises(TypeError): 1 + '1' def test_2_cannot_add_int_and_str(self): import operator self.assertRaises(TypeError, operator.add, 1, '1') Test one uses assertRaises as a context manager, which ensures that the error is properly caught and cleaned up, while recorded. We could also write it without the context manager, see test two. The first argument would be the error type you expect to raise, the second argument, the function you are testing, and the remaining args and keyword args will be passed to that function. I think it's far more simple, readable, and maintainable to just to use the context manager. Running the tests To run the tests: unittest.main(exit=False) In Python 2.6, you'll probably need the following: unittest.TextTestRunner().run(unittest.TestLoader().loadTestsFromTestCase(MyTestCase)) And your terminal should output the following: .. ---------------------------------------------------------------------- Ran 2 tests in 0.007s OK <unittest2.runner.TextTestResult run=2 errors=0 failures=0> And we see that as we expect, attempting to add a 1 and a '1' result in a TypeError. For more verbose output, try this: unittest.TextTestRunner(verbosity=2).run(unittest.TestLoader().loadTestsFromTestCase(MyTestCase)) A: I use doctest[1] almost everywhere because I like the fact that I document and test my functions at the same time. Have a look at this code: def throw_up(something, gowrong=False): """ >>> throw_up('Fish n Chips') Traceback (most recent call last): ... Exception: Fish n Chips >>> throw_up('Fish n Chips', gowrong=True) 'I feel fine!' """ if gowrong: return "I feel fine!" raise Exception(something) if __name__ == '__main__': import doctest doctest.testmod() If you put this example in a module and run it from the command line both test cases are evaluated and checked. [1] Python documentation: 23.2 doctest -- Test interactive Python examples A: While all the answers are perfectly fine, I was looking for a way to test if a function raised an exception without relying on unit testing frameworks and having to write test classes. I ended up writing the following: def assert_error(e, x): try: e(x) except: return raise AssertionError() def failing_function(x): raise ValueError() def dummy_function(x): return x if __name__=="__main__": assert_error(failing_function, 0) assert_error(dummy_function, 0) And it fails on the right line: Traceback (most recent call last): File "assert_error.py", line 16, in <module> assert_error(dummy_function, 0) File "assert_error.py", line 6, in assert_error raise AssertionError() AssertionError A: Unit testing with unittest would be preferred, but if you would like a quick fix, we can catch the exception, assign it to a variable, and see if that variable is an instance of that exception class. Lets assume our bad function throws a ValueError. try: bad_function() except ValueError as e: assert isinstance(e, ValueError)
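One caveat with that quick fix: if the function ever stops raising, the try block simply falls through and nothing fails. A small framework-free sketch that also covers that case might look like this (assert_raises and bad_function are assumed names for illustration, not part of the standard library):
def assert_raises(expected_exception, func, *args, **kwargs):
    # Call func and fail unless it raises the expected exception type.
    try:
        func(*args, **kwargs)
    except expected_exception:
        return  # the expected exception was raised, so the check passes
    raise AssertionError(
        "{} was not raised".format(expected_exception.__name__))

def bad_function():
    # Assumed placeholder for the function under test.
    raise ValueError("something went wrong")

assert_raises(ValueError, bad_function)  # passes quietly
# If bad_function ever stops raising, assert_raises now fails loudly
# instead of falling through silently like the bare try/except above.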
{ "language": "en", "url": "https://stackoverflow.com/questions/129507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1160" }
Q: Is it a bad idea to reload routes dynamically in Rails? I have an application I'm writing where I'm allowing the administrators to add aliases for pages, categories, etc, and I would like to use a different controller/action depending on the alias (without redirecting, and I've found that render doesn't actually call the method. I just renders the template). I have tried a catch all route, but I'm not crazy about causing and catching a DoubleRender exception that gets thrown everytime. The solution for this I've come up with is dynamically generated routes when the server is started, and using callbacks from the Alias model to reload routes when an alias is created/updated/destroyed. Here is the code from my routes.rb: Alias.find(:all).each do |alias_to_add| map.connect alias_to_add.name, :controller => alias_to_add.page_type.controller, :action => alias_to_add.page_type.action, :navigation_node_id => alias_to_add.navigation_node.id end I am using callbacks in my Alias model as follows: after_save :rebuild_routes after_destroy :rebuild_routes def rebuild_routes ActionController::Routing::Routes.reload! end Is this against Rails best practices? Is there a better solution? A: Quick Solution Have a catch-all route at the bottom of routes.rb. Implement any alias lookup logic you want in the action that route routes you to. In my implementation, I have a table which maps defined URLs to a controller, action, and parameter hash. I just pluck them out of the database, then call the appropriate action and then try to render the default template for the action. If the action already rendered something, that throws a DoubleRenderError, which I catch and ignore. You can extend this technique to be as complicated as you want, although as it gets more complicated it makes more sense to implement it by tweaking either your routes or the Rails default routing logic rather than by essentially reimplementing all the routing logic yourself. If you don't find an alias, you can throw the 404 or 500 error as you deem appropriate. Stuff to keep in mind: Caching: Not knowing your URLs a priori can make page caching an absolute bear. Remember, it caches based on the URI supplied, NOT on the url_for (:action_you_actually_executed). This means that if you alias /foo_action/bar_method to /some-wonderful-alias you'll get some-wonderful-alias.html living in your cache directory. And when you try to sweep foo's bar, you won't sweep that file unless you specify it explicitly. Fault Tolerance: Check to make sure someone doesn't accidentally alias over an existing route. You can do this trivially by forcing all aliases into a "directory" which is known to not otherwise be routable (in which case, the alias being textually unique is enough to make sure they never collide), but that isn't a maximally desirable solution for a few of the applications I can think of of this. A: First, as other have suggested, create a catch-all route at the bottom of routes.rb: map.connect ':name', :controller => 'aliases', :action => 'show' Then, in AliasesController, you can use render_component to render the aliased action: class AliasesController < ApplicationController def show if alias = Alias.find_by_name(params[:name]) render_component(:controller => alias.page_type.controller, :action => alias.page_type.action, :navigation_node_id => alias.navigation_node.id) else render :file => "#{RAILS_ROOT}/public/404.html", :status => :not_found end end end A: Ben, I find the method you're already using to be the best. 
Using Rails 3, you'd have to change the code a bit, to: MyNewApplication::Application.reload_routes! That's all. A: I'm not sure I fully understand the question, but you could use method_missing in your controllers and then lookup the alias, maybe like this: class MyController def method_missing(sym, *args) aliased = Alias.find_by_action_name(sym) # sanity check here in case no alias self.send( aliased.real_action_name ) # sanity check here in case the real action calls a different render explicitly render :action => aliased.real_action_name end def normal_action @thing = Things.find(params[:id]) end end If you wanted to optimize that, you could put a define_method in the method_missing, so it would only be 'missing' on the first invocation, and would be a normal method from then on.
{ "language": "en", "url": "https://stackoverflow.com/questions/129510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Overflowed (i.e. querystring) RouteData back out of the controller into the view Anyone understand why the following doesn't work? What I want to do is copy current route data plus whatever I add via an anonymous object into new routedata when forming new links on the view. For example if I have the parameter "page" as a non route path (i.e. so it overflows the route path and its injected into the method parameter if a querystring is present) e.g. public ActionResult ChangePage(int? page) { } and I want the View to know the updated page when building links using helpers. I thought the best way to do this is with the following: public ActionResult ChangePage(int? page) { if(page.HasValue) RouteData.Values.Add("Page", page); ViewData.Model = GetData(page.HasValue ? page.Value : 1); } Then in the view markup I can render my next, preview, sort, showmore (any links relevant) with this overload: public static class Helpers { public static string ActionLinkFromRouteData(this HtmlHelper helper, string linkText, string actionName, object values) { RouteValueDictionary routeValueDictionary = new RouteValueDictionary(); foreach(var routeValue in helper.ViewContext.RouteData.Values) { if(routeValue.Key != "controller" && routeValue.Key != "action") { routeValueDictionary[routeValue.Key] = routeValue; } } foreach(var prop in GetProperties(values)) { routeValueDictionary[prop.Name] = prop.Value; } return helper.ActionLink(linkText, actionName, routeValueDictionary; } private static IEnumerable<PropertyValue> GetProperties(object o) { if (o != null) { PropertyDescriptorCollection props = TypeDescriptor.GetProperties(o); foreach (PropertyDescriptor prop in props) { object val = prop.GetValue(o); if (val != null) { yield return new PropertyValue { Name = prop.Name, Value = val }; } } } } private sealed class PropertyValue { public string Name { get; set; } public object Value { get; set; } } } I have posted the code only to illustrate the point. This doesn't work and doesn't feel right... Pointers? A: Pass the page info into ViewData? PagedResultsInfo (or something) sounds like a class you could write too... we do.
{ "language": "en", "url": "https://stackoverflow.com/questions/129523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Anyone know of a solid .NET Framework 2.0 installer script for Inno Setup? I've spent a good part of the day searching, writing and finally scrapping a script that I can use with my Inno Setup install script that will download and install the appropriate .NET 2.0 Framework if needed. There are definitely a number of examples out there, but they: * *Want to install Internet Explorer if needed which I wouldn't dare to in an automated way *Only handle x86 .NET distributions, no x64 and IA64 support *Don't install the appropriate language pack when needed -- a tough problem (when I saw there were different language packs for different x86/x64/language combos I threw in the towel) *Don't handle getting the .NET 2.0 SP1 (maybe Windows Update will handle that once 2.0 is installed?) This seems like such a common problem that someone must have solved it. All I found though were 20 different posts all pointing to the same two or three code snippets. Insight welcomed :) A: .NET Framework 1.1/2.0/3.5 Installer for InnoSetup A: I have recently been looking into this issue but without the same requirements that you have. I haven't seen a script that does what you want but have you considered instead checking if .NET 2.0 is installed and if not then prompt them to download it. You can open a URL in the default browser and get the user to attempt the install again once the framework has been installed. This is not an ideal situation from a user perspective but i think going with what your planning you will have to write some complex stuff to handle the different language constraints just to get it working. Just my 2 cents.
{ "language": "en", "url": "https://stackoverflow.com/questions/129542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way of handling non-validating SSL certificates in C# I'm using the following code to make sure all certificates pass, even invalid ones, but I would like to know if there is a better way, as this event gets called globally and I only want the certificate to pass for a certain HTTP call and not for any others that are happening asynchronously. // This delegate makes sure that non-validating SSL certificates are passed ServicePointManager.ServerCertificateValidationCallback = delegate(object certsender, X509Certificate cert, X509Chain chain, System.Net.Security.SslPolicyErrors error) { return true; }; The code above is just an example of ignoring any non-validation on the certificate. The problem that I'm having is that it is a global event. I can't see which session the event is happening for. I might have a couple of http requests going through and I want to ask the user for an action for each request. A: Well, you could actually bother to check some of those parameters. ;) For instance, if you have a self signed certificate, then only let error == SslPolicyErrors.RemoteCertificateChainError through. You could also check the issuer, name, etc. on the certificate itself for additional security. A: What about the certsender argument? Does it contain anything sensible so that you can tell what connection the callback is happening for? I checked the .NET API but it doesn't say what the argument is supposed to contain...
{ "language": "en", "url": "https://stackoverflow.com/questions/129544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What has your QA/tester team said or done for the development team that made your day (as a developer) There are lots of questions on how to improve communication between teams. One way to start is to identify what one team actually does that the other team really values and do more of that. For example. Our QA team provided a VM for us with: * *The latest release of our server-based commercial software installed and configured (not an easy task in that an installation on-site takes at least 2 days) *A database backup of the configured system including sample data *an auto-install and configure application that mostly works. (with 12 install packages for the components needed, this is a big time saver) While we still do most of our testing on our own desktops, this allows us to have a relatively clean environment we can run locally. What has your QA team done for you lately? Conversely, what have you done for your QA team? A: "It sucks less." That truly made my day. A: A good friend of mine who used to be in our QA department put together a bunch of amazing scripts with AutoIt. To me they were like gold, he would find issues, write me a script, email me the executable and I'd have a way to reproduce problems in a snap. His scripts helped me track down a memory leak that I had been (unsuccessfully) trying to track down for months. Automated testing is a Good Thing. Oh - he has since been promoted to Software Engineer and works on my team now. A: I'm surprised that nobody has said "My QA team found an important bug before my code got to the customer." A: "It crashed!" - the bug we were hunting for something like several months was reproduced.
{ "language": "en", "url": "https://stackoverflow.com/questions/129551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Sql server real datatype, what is the C# equivalent? What is the C# equivalent to the sql server 2005 real type? A: It is a Single; see here for more info on SQL Server to .Net DataTypes A: Double can be used as the .NET equivalent datatype for the Real datatype of SQL Server; Double gets the exact value without rounding A: The answer is Single. If you use double and your SQL field is type real it will error out. I tested this myself and confirmed. A: Its equivalent is Single. Single is a floating point number within the range of -3.40E +38 through 3.40E +38. Here is the latest from MSDN describing all SqlDbType and C# equivalents A: Single is not the correct answer as it rounds the decimal part. For instance 2.0799999 will be converted to 2.08. If there are no constraints regarding the rounding then it should be good. A: In my project (Access -> Firebird and MS SQL -> C#) real is defined as a single precision floating point number... so I used float and everything is OK A: The answer is Single or float (depending on style). This is like the difference between String and string. [Source: ReSharper code suggestion "use type keyword" when using Single; it suggested using float.]
{ "language": "en", "url": "https://stackoverflow.com/questions/129560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Profiling SQL Server and/or ASP.NET How would one go about profiling a few queries that are being run from an ASP.NET application? There is some software where I work that runs extremely slow because of the database (I think). The tables have indexes but it still drags because it's working with so much data. How can I profile to see where I can make a few minor improvements that will hopefully lead to larger speed improvements? Edit: I'd like to add that the webserver likes to time out during these long queries. A: SQL Server has some excellent tools to help you with this situation. These tools are built into Management Studio (which used to be called Enterprise Manager + Query Analyzer). Use SQL Profiler to show you the actual queries coming from the web application. Copy each of the problem queries out (the ones that eat up lots of CPU time or IO). Run the queries with "Display Actual Execution Plan". Hopefully you will see some obvious index that is missing. You can also run the tuning wizard (the button is right next to "Display Actual Execution Plan"). It will run the query and make suggestions. Usually, if you already have indexes and queries are still running slow, you will need to re-write the queries in a different way. Keeping all of your queries in stored procedures makes this job much easier. A: To profile SQL Server, use the SQL Profiler. And you can use ANTS Profiler from Red Gate to profile your code. A: Another .NET profiler which plays nicely with ASP.NET is dotTrace. I have personally used it and found lots of bottlenecks in my code. A: I believe you have the answer you need to profile the queries. However, this is the easiest part of performance tuning. Once you know it is the queries and not the network or the app, how do you find and fix the problem? Performance tuning is a complex thing. But there are some places to look at first. You say you are returning lots of data? Are you returning more data than you need? Are you really returning only the columns and records you need? Returning 100 columns by using select * can be much slower than returning the 5 columns you are actually using. Are your indexes and statistics up-to-date? Look up how to update statistics and re-index in BOL if you haven't done this in a while. Do you have indexes on all the join fields? How about the fields in the where clause? Have you used a cursor? Have you used subqueries? How about union: if you are using it, can it be changed to union all? Are your queries sargable? (Google the term if you are unfamiliar with it.) Are you using distinct when you could use group by? Are you getting locks? There are many other things to look at; these are just a starting place. A: If there is a particular query or stored procedure I want to tune, I have found turning on statistics before the query to be very useful: SET STATISTICS TIME ON SET STATISTICS IO ON When you turn on statistics in Query Analyzer, the statistics are shown in the Messages tab of the Results pane. IO statistics have been particularly useful for me, because they let me know if I might need an index. If I see a high read count from the IO statistics, I might try adding different indexes to the affected tables. As I try an index, I run the query again to see if the read count has gone down. After a few iterations, I can usually find the best index(es) for the tables involved. Here are links to MSDN for these statistics commands: SET STATISTICS TIME SET STATISTICS IO
{ "language": "en", "url": "https://stackoverflow.com/questions/129605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the difference between my and local in Perl? I am seeing both of them used in this script I am trying to debug and the literature is just not clear. Can someone demystify this for me? A: Well Google really works for you on this one: http://www.perlmonks.org/?node_id=94007 From the link: Quick summary: 'my' creates a new variable, 'local' temporarily amends the value of a variable. ie, 'local' temporarily changes the value of the variable, but only within the scope it exists in. Generally use my, it's faster and doesn't do anything kind of weird. A: From man perlsub: Unlike dynamic variables created by the local operator, lexical variables declared with my are totally hidden from the outside world, including any called subroutines. So, oversimplifying, my makes your variable visible only where it's declared. local makes it visible down the call stack too. You will usually want to use my instead of local. A: The short answer is that my marks a variable as private in a lexical scope, and local marks a variable as private in a dynamic scope. It's easier to understand my, since that creates a local variable in the usual sense. There is a new variable created and it's accessible only within the enclosing lexical block, which is usually marked by curly braces. There are some exceptions to the curly-brace rule, such as: foreach my $x (@foo) { print "$x\n"; } But that's just Perl doing what you mean. Normally you have something like this: sub Foo { my $x = shift; print "$x\n"; } In that case, $x is private to the subroutine and its scope is enclosed by the curly braces. The thing to note, and this is the contrast to local, is that the scope of a my variable is defined with respect to your code as it is written in the file. It's a compile-time phenomenon. To understand local, you need to think in terms of the calling stack of your program as it is running. When a variable is local, it is redefined from the point at which the local statement executes for everything below that on the stack, until you return back up the stack to the caller of the block containing the local. This can be confusing at first, so consider the following example. sub foo { print "$x\n"; } sub bar { local $x; $x = 2; foo(); } $x = 1; foo(); # prints '1' bar(); # prints '2' because $x was localed in bar foo(); # prints '1' again because local from foo is no longer in effect When foo is called the first time, it sees the global value of $x which is 1. When bar is called and local $x runs, that redefines the global $x on the stack. Now when foo is called from bar, it sees the new value of 2 for $x. So far that isn't very special, because the same thing would have happened without the call to local. The magic is that when bar returns we exit the dynamic scope created by local $x and the previous global $x comes back into scope. So for the final call of foo, $x is 1. You will almost always want to use my, since that gives you the local variable you're looking for. Once in a blue moon, local is really handy to do cool things. A: Your confusion is understandable. Lexical scoping is fairly easy to understand but dynamic scoping is an unusual concept. The situation is made worse by the names my and local being somewhat inaccurate (or at least unintuitive) for historical reasons. my declares a lexical variable -- one that is visible from the point of declaration until the end of the enclosing block (or file). It is completely independent from any other variables with the same name in the rest of the program. 
It is private to that block. local, on the other hand, declares a temporary change to the value of a global variable. The change ends at the end of the enclosing scope, but the variable -- being global -- is visible anywhere in the program. As a rule of thumb, use my to declare your own variables and local to control the impact of changes to Perl's built-in variables. For a more thorough description see Mark Jason Dominus' article Coping with Scoping. A: local is an older method of localization, from the times when Perl had only dynamic scoping. Lexical scoping is much more natural for the programmer and much safer in many situations. my variables belong to the scope (block, package, or file) in which they are declared. local variables instead actually belong to a global namespace. If you refer to a variable $x with local, you are actually referring to $main::x, which is a global variable. Contrary to what it's name implies, all local does is push a new value onto a stack of values for $main::x until the end of this block, at which time the old value will be restored. That's a useful feature in and of itself, but it's not a good way to have local variables for a host of reasons (think what happens when you have threads! and think what happens when you call a routine that genuinely wants to use a global that you have localized!). However, it was the only way to have variables that looked like local variables back in the bad old days before Perl 5. We're still stuck with it. A: Dynamic Scoping. It is a neat concept. Many people don't use it, or understand it. Basically think of my as creating and anchoring a variable to one block of {}, A.K.A. scope. my $foo if (true); # $foo lives and dies within the if statement. So a my variable is what you are used to. whereas with dynamic scoping $var can be declared anywhere and used anywhere. So with local you basically suspend the use of that global variable, and use a "local value" to work with it. So local creates a temporary scope for a temporary variable. $var = 4; print $var, "\n"; &hello; print $var, "\n"; # subroutines sub hello { local $var = 10; print $var, "\n"; &gogo; # calling subroutine gogo print $var, "\n"; } sub gogo { $var ++; } This should print: 4 10 11 4 A: Quoting from Learning Perl: But local is misnamed, or at least misleadingly named. Our friend Chip Salzenberg says that if he ever gets a chance to go back in a time machine to 1986 and give Larry one piece of advice, he'd tell Larry to call local by the name "save" instead.[14] That's because local actually will save the given global variable's value away, so it will later automatically be restored to the global variable. (That's right: these so-called "local" variables are actually globals!) This save-and-restore mechanism is the same one we've already seen twice now, in the control variable of a foreach loop, and in the @_ array of subroutine parameters. So, local saves a global variable's current value and then set it to some form of empty value. You'll often see it used to slurp an entire file, rather than leading just a line: my $file_content; { local $/; open IN, "foo.txt"; $file_content = <IN>; } Calling local $/ sets the input record separator (the value that Perl stops reading a "line" at) to an empty value, causing the spaceship operator to read the entire file, so it never hits the input record separator. A: "my" variables are visible in the current code block only. "local" variables are also visible where ever they were visible before. 
For example, if you say "my $x;" and call a sub-function, it cannot see that variable $x. But if you say "local $/;" (to null out the value of the record separator) then you change the way reading from files works in any functions you call. In practice, you almost always want "my", not "local". A: Look at the following code and its output to understand the difference. our $name = "Abhishek"; sub sub1 { print "\nName = $name\n"; local $name = "Abhijeet"; &sub2; &sub3; } sub sub2 { print "\nName = $name\n"; } sub sub3 { my $name = "Abhinav"; print "\nName = $name\n"; } &sub1; Output is : Name = Abhishek Name = Abhijeet Name = Abhinav A: I can’t believe no one has linked to Mark Jason Dominus’ exhaustive treatises on the matter: * *Coping with Scoping *And afterwards, if you want to know what local is good for after all,Seven Useful Uses of local A: http://perldoc.perl.org/perlsub.html#Private-Variables-via-my() Unlike dynamic variables created by the local operator, lexical variables declared with my are totally hidden from the outside world, including any called subroutines. This is true if it's the same subroutine called from itself or elsewhere--every call gets its own copy. http://perldoc.perl.org/perlsub.html#Temporary-Values-via-local() A local modifies its listed variables to be "local" to the enclosing block, eval, or do FILE --and to any subroutine called from within that block. A local just gives temporary values to global (meaning package) variables. It does not create a local variable. This is known as dynamic scoping. Lexical scoping is done with my, which works more like C's auto declarations. I don't think this is at all unclear, other than to say that by "local to the enclosing block", what it means is that the original value is restored when the block is exited. A: &s; sub s() { local $s="5"; &b; print $s; } sub b() { $s++; } The above script prints 6. But if we change local to my it will print 5. This is the difference. Simple. A: It will differ only when you have a subroutine called within a subroutine, for example: sub foo { print "$x\n"; } sub bar { my $x; $x = 2; foo(); } bar(); It prints nothing as $x is limited by {} of bar and not visible to called subroutines, for example: sub foo { print "$x\n"; } sub bar { local $x; $x = 2; foo(); } bar(); It will print 2 as local variables are visible to called subroutines. A: dinomite's example of using local to redefine the record delimiter is the only time I have ran across in a lot of perl programming. I live in a niche perl environment [security programming], but it really is a rarely used scope in my experience. A: I think the easiest way to remember it is this way. MY creates a new variable. LOCAL temporarily changes the value of an existing variable. A: #!/usr/bin/perl sub foo { print ", x is $x\n"; } sub testdefault { $x++; foo(); } # prints 2 sub testmy { my $x; $x++; foo(); } # prints 1 sub testlocal { local $x = 2; foo(); } # prints 2. new set mandatory print "Default, everything is global"; $x = 1; testdefault(); print "My does not affect function calls outside"; $x = 1; testmy(); print "local is everything after this but initializes a new"; $x = 1; testlocal(); As mentioned in testlocal comment, declaring "local $x;" means that $x is now undef
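If it helps to see the my/local distinction outside of Perl, the save-and-restore behaviour of local can be imitated in Python with a context manager around a module-level global. This is only an analogy for illustration, not a Perl feature, and the names (x, local_x, foo, bar) are made up:
import contextlib

x = 1  # module-level "global", playing the role of a Perl package variable

@contextlib.contextmanager
def local_x(temporary_value):
    # Save the current global, install the temporary value, and restore the
    # old value on exit -- roughly what Perl's local does for the dynamic
    # extent of the enclosing block.
    global x
    saved = x
    x = temporary_value
    try:
        yield
    finally:
        x = saved

def foo():
    print(x)

def bar():
    with local_x(2):
        foo()   # called "inside" the dynamic scope, so it prints 2

foo()   # prints 1
bar()   # prints 2
foo()   # prints 1 again: the old value was restored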
{ "language": "en", "url": "https://stackoverflow.com/questions/129607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Cannot Access http://:8080 I've installed TFS 2008, but I can't seem to access the server. When I try to connect to it in Visual Studio, I can't. If I try by browser on a remote PC, I get a generic page cannot be displayed. On the server, I get a 403. Nothing was touched in IIS and the service is running as a Network Service. Any ideas? A: try: http://localhost:8080/Services/V1.0/ServerStatus.asmx. This will tell you if TFS is up and running. If you are getting anything else you need to look into IIS issues. A: I wrote a blog post on diagnosing these types of TFS connections. http://blogs.msdn.com/granth/archive/2008/06/26/troubleshooting-connections-to-tfs.aspx The very first thing I do is confirm that it works for a known-good configuration – usually my workstation. Providing that works and the server appears to be functioning, the next thing I do is ask the user to call the CheckAuthentication web service using Internet Explorer. The URL for this is: http://TFSSERVER:8080/services/v1.0/ServerStatus.asmx?op=CheckAuthentication By doing this check, I am doing four things: * *Eliminating Team Explorer from the picture *Eliminating the .NET networking stack from the picture *Ensuring that Windows Authentication is working correctly (that’s why I say IE) *Ensuring that proxy settings are set correctly In most cases I’ve seen, the TFS connection issues are because the proxy settings have changed or are incorrect. Because .NET and Visual Studio use the proxy settings from Internet Explorer, it’s important to have them set correctly. In rare cases it’s beyond this. That’s when I start looking at things like: * *Can you resolve the server name? *Can you connect using the IP address? *Are there HOSTS file entries? (see: c:\windows\system32\drivers\etc\hosts) *Can you ping the server? *Can you telnet to port 8080? *Does the user actually have access? Run TfsSecurity.exe /server:servername /im n:DOMAIN\User to check their group memberships *Have you changed your domain password lately? In some cases they’ll need to logoff the workstation and log back on again to get a new security token. *Is the computer's domain certificate valid? update the certificate: gpupdate /force Hope this helps. A: Turns out the time and date on my computer was not "close enough" to the time and date on the tfs server. Changed my system clock setting and problem went away. A: What happens if you send a simple HTTP request to the server directly? ie: telnet 8080 [enter] GET / HTTP/1.1[enter] [enter] [enter] That might give a hint about whether IIS is actually serving anything. If you can do that on the server, what about from a different machine? If the results are different a good guess is there are some security/firewall issues somewhere. HTH a little. A: I went through everything on a similar problem. I logged onto my tfs server and connected directly there. I also used a TFS admin tool I downloaded some time ago from Microsoft, and made sure I was in all the right groups and projects. I then went back to the client PC with the problem, tried the services/1.0/serverstatus.asmx?op=CheckAuthentication Url again, and it worked this time. AFter that full service was restored to my PC. So I don't have the exact answer, but I would go through the checklists presented by Grant Holliday in his answer. A: Add this to the cases for future users, as i had this issue on server 2016... if your firewall allow only Domain and Private Network, it may not work on client. 
Make sure you also allow the Public profile in the firewall rule if the server's network is set to Public. Otherwise the error you may face is ERR_CONNECTION_TIMED_OUT for http://fserver:8080/tfs
{ "language": "en", "url": "https://stackoverflow.com/questions/129618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is declarative programming? I keep hearing this term tossed around in several different contexts. What is it? A: The other answers already do a fantastic job explaining what declarative programming is, so I'm just going to provide some examples of why that might be useful. Context Independence Declarative Programs are context-independent. Because they only declare what the ultimate goal is, but not the intermediary steps to reach that goal, the same program can be used in different contexts. This is hard to do with imperative programs, because they often depend on the context (e.g. hidden state). Take yacc as an example. It's a parser generator aka. compiler compiler, an external declarative DSL for describing the grammar of a language, so that a parser for that language can automatically be generated from the description. Because of its context independence, you can do many different things with such a grammar: * *Generate a C parser for that grammar (the original use case for yacc) *Generate a C++ parser for that grammar *Generate a Java parser for that grammar (using Jay) *Generate a C# parser for that grammar (using GPPG) *Generate a Ruby parser for that grammar (using Racc) *Generate a tree visualization for that grammar (using GraphViz) *simply do some pretty-printing, fancy-formatting and syntax highlighting of the yacc source file itself and include it in your Reference Manual as a syntactic specification of your language And many more … Optimization Because you don't prescribe the computer which steps to take and in what order, it can rearrange your program much more freely, maybe even execute some tasks in parallel. A good example is a query planner and query optimizer for a SQL database. Most SQL databases allow you to display the query that they are actually executing vs. the query that you asked them to execute. Often, those queries look nothing like each other. The query planner takes things into account that you wouldn't even have dreamed of: rotational latency of the disk platter, for example or the fact that some completely different application for a completely different user just executed a similar query and the table that you are joining with and that you worked so hard to avoid loading is already in memory anyway. There is an interesting trade-off here: the machine has to work harder to figure out how to do something than it would in an imperative language, but when it does figure it out, it has much more freedom and much more information for the optimization stage. A: Declarative programming is the picture, where imperative programming is instructions for painting that picture. You're writing in a declarative style if you're "Telling it what it is", rather than describing the steps the computer should take to get to where you want it. When you use XML to mark-up data, you're using declarative programming because you're saying "This is a person, that is a birthday, and over there is a street address". Some examples of where declarative and imperative programming get combined for greater effect: * *Windows Presentation Foundation uses declarative XML syntax to describe what a user interface looks like, and what the relationships (bindings) are between controls and underlying data structures. *Structured configuration files use declarative syntax (as simple as "key=value" pairs) to identify what a string or value of data means. *HTML marks up text with tags that describe what role each piece of text has in relation to the whole document. 
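To make the what-versus-how contrast concrete in ordinary code, here is a small sketch; Python is used purely as a convenient notation, and the total_of_evens functions are invented for illustration:
# Imperative: spell out each step and mutate an accumulator.
def total_of_evens_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

# More declarative: state what the result is and let the built-ins
# decide how to iterate and accumulate.
def total_of_evens_declarative(numbers):
    return sum(n for n in numbers if n % 2 == 0)

print(total_of_evens_imperative([1, 2, 3, 4]))   # 6
print(total_of_evens_declarative([1, 2, 3, 4]))  # 6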
A: Declarative Programming is programming with declarations, i.e. declarative sentences. Declarative sentences have a number of properties that distinguish them from imperative sentences. In particular, declarations are: * *commutative (can be reordered) *associative (can be regrouped) *idempotent (can repeat without change in meaning) *monotonic (declarations don't subtract information) A relevant point is that these are all structural properties and are orthogonal to subject matter. Declarative is not about "What vs. How". We can declare (represent and constrain) a "how" just as easily as we declare a "what". Declarative is about structure, not content. Declarative programming has a significant impact on how we abstract and refactor our code, and how we modularize it into subprograms, but not so much on the domain model. Often, we can convert from imperative to declarative by adding context. E.g. from "Turn left. (... wait for it ...) Turn Right." to "Bob will turn left at intersection of Foo and Bar at 11:01. Bob will turn right at the intersection of Bar and Baz at 11:06." Note that in the latter case the sentences are idempotent and commutative, whereas in the former case rearranging or repeating the sentences would severely change the meaning of the program. Regarding monotonic, declarations can add constraints which subtract possibilities. But constraints still add information (more precisely, constraints are information). If we need time-varying declarations, it is typical to model this with explicit temporal semantics - e.g. from "the ball is flat" to "the ball is flat at time T". If we have two contradictory declarations, we have an inconsistent declarative system, though this might be resolved by introducing soft constraints (priorities, probabilities, etc.) or leveraging a paraconsistent logic. A: Describing to a computer what you want, not how to do something. A: imagine an excel page. With columns populated with formulas to calculate you tax return. All the logic is done declared in the cells, the order of the calculation is by determine by formula itself rather than procedurally. That is sort of what declarative programming is all about. You declare the problem space and the solution rather than the flow of the program. Prolog is the only declarative language I've use. It requires a different kind of thinking but it's good to learn if just to expose you to something other than the typical procedural programming language. A: I have refined my understanding of declarative programming, since Dec 2011 when I provided an answer to this question. Here follows my current understanding. The long version of my understanding (research) is detailed at this link, which you should read to gain a deep understanding of the summary I will provide below. Imperative programming is where mutable state is stored and read, thus the ordering and/or duplication of program instructions can alter the behavior (semantics) of the program (and even cause a bug, i.e. unintended behavior). In the most naive and extreme sense (which I asserted in my prior answer), declarative programming (DP) is avoiding all stored mutable state, thus the ordering and/or duplication of program instructions can NOT alter the behavior (semantics) of the program. However, such an extreme definition would not be very useful in the real world, since nearly every program involves stored mutable state. 
The spreadsheet example conforms to this extreme definition of DP, because the entire program code is run to completion with one static copy of the input state, before the new states are stored. Then if any state is changed, this is repeated. But most real world programs can't be limited to such a monolithic model of state changes. A more useful definition of DP is that the ordering and/or duplication of programming instructions do not alter any opaque semantics. In other words, there are not hidden random changes in semantics occurring-- any changes in program instruction order and/or duplication cause only intended and transparent changes to the program's behavior. The next step would be to talk about which programming models or paradigms aid in DP, but that is not the question here. A: It's a method of programming based around describing what something should do or be instead of describing how it should work. In other words, you don't write algorithms made of expressions, you just layout how you want things to be. Two good examples are HTML and WPF. This Wikipedia article is a good overview: http://en.wikipedia.org/wiki/Declarative_programming A: Since I wrote my prior answer, I have formulated a new definition of the declarative property which is quoted below. I have also defined imperative programming as the dual property. This definition is superior to the one I provided in my prior answer, because it is succinct and it is more general. But it may be more difficult to grok, because the implication of the incompleteness theorems applicable to programming and life in general are difficult for humans to wrap their mind around. The quoted explanation of the definition discusses the role pure functional programming plays in declarative programming. Declarative vs. Imperative The declarative property is weird, obtuse, and difficult to capture in a technically precise definition that remains general and not ambiguous, because it is a naive notion that we can declare the meaning (a.k.a semantics) of the program without incurring unintended side effects. There is an inherent tension between expression of meaning and avoidance of unintended effects, and this tension actually derives from the incompleteness theorems of programming and our universe. It is oversimplification, technically imprecise, and often ambiguous to define declarative as “what to do” and imperative as “how to do”. An ambiguous case is the “what” is the “how” in a program that outputs a program— a compiler. Evidently the unbounded recursion that makes a language Turing complete, is also analogously in the semantics— not only in the syntactical structure of evaluation (a.k.a. operational semantics). This is logically an example analogous to Gödel's theorem— “any complete system of axioms is also inconsistent”. Ponder the contradictory weirdness of that quote! It is also an example that demonstrates how the expression of semantics does not have a provable bound, thus we can't prove2 that a program (and analogously its semantics) halt a.k.a. the Halting theorem. The incompleteness theorems derive from the fundamental nature of our universe, which as stated in the Second Law of Thermodynamics is “the entropy (a.k.a. the # of independent possibilities) is trending to maximum forever”. The coding and design of a program is never finished— it's alive!— because it attempts to address a real world need, and the semantics of the real world are always changing and trending to more possibilities. 
Humans never stop discovering new things (including errors in programs ;-). To precisely and technically capture this aforementioned desired notion within this weird universe that has no edge (ponder that! there is no “outside” of our universe), requires a terse but deceptively-not-simple definition which will sound incorrect until it is explained deeply. Definition: The declarative property is where there can exist only one possible set of statements that can express each specific modular semantic. The imperative property3 is the dual, where semantics are inconsistent under composition and/or can be expressed with variations of sets of statements. This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless where and how it's instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others— and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML. Examples of these modular declarative semantics include category theory functors e.g. the Applicative, nominal typing, namespaces, named fields, and w.r.t. to operational level of semantics then pure functional programming. Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency. An example of the aforementioned definition is the set of formulas in the cells of a spreadsheet program— which are not expected to give the same meaning when moved to different column and row cells, i.e. cell identifiers changed. The cell identifiers are part of and not superfluous to the intended meaning. So each spreadsheet result is unique w.r.t. to the cell identifiers in a set of formulas. The consistent modular semantic in this case is use of cell identifiers as the input and output of pure functions for cells formulas (see below). Hyper Text Markup Language a.k.a. HTML— the language for static web pages— is an example of a highly (but not perfectly3) declarative language that (at least before HTML 5) had no capability to express dynamic behavior. HTML is perhaps the easiest language to learn. For dynamic behavior, an imperative scripting language such as JavaScript was usually combined with HTML. HTML without JavaScript fits the declarative definition because each nominal type (i.e. the tags) maintains its consistent meaning under composition within the rules of the syntax. A competing definition for declarative is the commutative and idempotent properties of the semantic statements, i.e. that statements can be reordered and duplicated without changing the meaning. For example, statements assigning values to named fields can be reordered and duplicated without changed the meaning of the program, if those names are modular w.r.t. to any implied order. Names sometimes imply an order, e.g. cell identifiers include their column and row position— moving a total on spreadsheet changes its meaning. Otherwise, these properties implicitly require global consistency of semantics. 
It is generally impossible to design the semantics of statements so they remain consistent if randomly ordered or duplicated, because order and duplication are intrinsic to semantics. For example, the statements “Foo exists” (or construction) and “Foo does not exist” (and destruction). If one considers random inconsistency endemical of the intended semantics, then one accepts this definition as general enough for the declarative property. In essence this definition is vacuous as a generalized definition because it attempts to make consistency orthogonal to semantics, i.e. to defy the fact that the universe of semantics is dynamically unbounded and can't be captured in a global coherence paradigm. Requiring the commutative and idempotent properties for the (structural evaluation order of the) lower-level operational semantics converts operational semantics to a declarative localized modular semantic, e.g. pure functional programming (including recursion instead of imperative loops). Then the operational order of the implementation details do not impact (i.e. spread globally into) the consistency of the higher-level semantics. For example, the order of evaluation of (and theoretically also the duplication of) the spreadsheet formulas doesn't matter because the outputs are not copied to the inputs until after all outputs have been computed, i.e. analogous to pure functions. C, Java, C++, C#, PHP, and JavaScript aren't particularly declarative. Copute's syntax and Python's syntax are more declaratively coupled to intended results, i.e. consistent syntactical semantics that eliminate the extraneous so one can readily comprehend code after they've forgotten it. Copute and Haskell enforce determinism of the operational semantics and encourage “don't repeat yourself” (DRY), because they only allow the pure functional paradigm. 2 Even where we can prove the semantics of a program, e.g. with the language Coq, this is limited to the semantics that are expressed in the typing, and typing can never capture all of the semantics of a program— not even for languages that are not Turing complete, e.g. with HTML+CSS it is possible to express inconsistent combinations which thus have undefined semantics. 3 Many explanations incorrectly claim that only imperative programming has syntactically ordered statements. I clarified this confusion between imperative and functional programming. For example, the order of HTML statements does not reduce the consistency of their meaning. Edit: I posted the following comment to Robert Harper's blog: in functional programming ... the range of variation of a variable is a type Depending on how one distinguishes functional from imperative programming, your ‘assignable’ in an imperative program also may have a type placing a bound on its variability. The only non-muddled definition I currently appreciate for functional programming is a) functions as first-class objects and types, b) preference for recursion over loops, and/or c) pure functions— i.e. those functions which do not impact the desired semantics of the program when memoized (thus perfectly pure functional programming doesn't exist in a general purpose denotational semantics due to impacts of operational semantics, e.g. memory allocation). The idempotent property of a pure function means the function call on its variables can be substituted by its value, which is not generally the case for the arguments of an imperative procedure. Pure functions seem to be declarative w.r.t. 
to the uncomposed state transitions between the input and result types. But the composition of pure functions does not maintain any such consistency, because it is possible to model a side-effect (global state) imperative process in a pure functional programming language, e.g. Haskell's IOMonad and moreover it is entirely impossible to prevent doing such in any Turing complete pure functional programming language. As I wrote in 2012 which seems to the similar consensus of comments in your recent blog, that declarative programming is an attempt to capture the notion that the intended semantics are never opaque. Examples of opaque semantics are dependence on order, dependence on erasure of higher-level semantics at the operational semantics layer (e.g. casts are not conversions and reified generics limit higher-level semantics), and dependence on variable values which can not be checked (proved correct) by the programming language. Thus I have concluded that only non-Turing complete languages can be declarative. Thus one unambiguous and distinct attribute of a declarative language could be that its output can be proven to obey some enumerable set of generative rules. For example, for any specific HTML program (ignoring differences in the ways interpreters diverge) that is not scripted (i.e. is not Turing complete) then its output variability can be enumerable. Or more succinctly an HTML program is a pure function of its variability. Ditto a spreadsheet program is a pure function of its input variables. So it seems to me that declarative languages are the antithesis of unbounded recursion, i.e. per Gödel's second incompleteness theorem self-referential theorems can't be proven. Lesie Lamport wrote a fairytale about how Euclid might have worked around Gödel's incompleteness theorems applied to math proofs in the programming language context by to congruence between types and logic (Curry-Howard correspondence, etc). A: Declarative programming is "the act of programming in languages that conform to the mental model of the developer rather than the operational model of the machine". The difference between declarative and imperative programming is well illustrated by the problem of parsing structured data. An imperative program would use mutually recursive functions to consume input and generate data. A declarative program would express a grammar that defines the structure of the data so that it can then be parsed. The difference between these two approaches is that the declarative program creates a new language that is more closely mapped to the mental model of the problem than is its host language. A: Loosely: Declarative programming tends towards:- * *Sets of declarations, or declarative statements, each of which has meaning (often in the problem domain) and may be understood independently and in isolation. Imperative programming tends towards:- * *Sequences of commands, each of which perform some action; but which may or may not have meaning in the problem domain. As a result, an imperative style helps the reader to understand the mechanics of what the system is actually doing, but may give little insight into the problem that it is intended to solve. On the other hand, a declarative style helps the reader to understand the problem domain and the approach that the system takes towards the solution of the problem, but is less informative on the matter of mechanics. 
Real programs (even ones written in languages that favor the ends of the spectrum, such as ProLog or C) tend to have both styles present to various degrees at various points, to satisfy the varying complexities and communication needs of the piece. One style is not superior to the other; they just serve different purposes, and, as with many things in life, moderation is key. A: It may sound odd, but I'd add Excel (or any spreadsheet really) to the list of declarative systems. A good example of this is given here. A: Here's an example. In CSS (used to style HTML pages), if you want an image element to be 100 pixels high and 100 pixels wide, you simply "declare" that that's what you want as follows: #myImageId { height: 100px; width: 100px; } You can consider CSS a declarative "style sheet" language. The browser engine that reads and interprets this CSS is free to make the image appear this tall and this wide however it wants. Different browser engines (e.g., the engine for IE, the engine for Chrome) will implement this task differently. Their unique implementations are, of course, NOT written in a declarative language but in a procedural one like Assembly, C, C++, Java, JavaScript, or Python. That code is a bunch of steps to be carried out step by step (and might include function calls). It might do things like interpolate pixel values, and render on the screen. A: Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how. Examples of declarative programming languages are SQL and Prolog. A: I am sorry, but I must disagree with many of the other answers. I would like to stop this muddled misunderstanding of the definition of declarative programming. Definition Referential transparency (RT) of the sub-expressions is the only required attribute of a declarative programming expression, because it is the only attribute which is not shared with imperative programming. Other cited attributes of declarative programming, derive from this RT. Please click the hyperlink above for the detailed explanation. Spreadsheet example Two answers mentioned spreadsheet programming. In the cases where the spreadsheet programming (a.k.a. formulas) does not access mutable global state, then it is declarative programming. This is because the mutable cell values are the monolithic input and output of the main() (the entire program). The new values are not written to the cells after each formula is executed, thus they are not mutable for the life of the declarative program (execution of all the formulas in the spreadsheet). Thus relative to each other, the formulas view these mutable cells as immutable. An RT function is allowed to access immutable global state (and also mutable local state). Thus the ability to mutate the values in the cells when the program terminates (as an output from main()), does not make them mutable stored values in the context of the rules. The key distinction is the cell values are not updated after each spreadsheet formula is performed, thus the order of performing the formulas does not matter. The cell values are updated after all the declarative formulas have been performed. A: I'd explain it as DP is a way to express * *A goal expression, the conditions for - what we are searching for. Is there one, maybe or many? 
*Some known facts *Rules that extend the known facts ...and where there is a deduction engine usually working with a unification algorithm to find the goals. A: It depends on how you submit the answer to the text. Overall you can look at the programme from a certain view but it depends on what angle you look at the problem. I will get you started with the programme: Dim Bus, Car, Time, Height As Integer Again it depends on what the problem is overall. You might have to shorten it due to the programme. Hope this helps and need the feedback if it does not. Thank You. A: A couple other examples of declarative programming: * *ASP.Net markup for databinding. It just says "fill this grid with this source", for example, and leaves it to the system for how that happens. *Linq expressions Declarative programming is nice because it can help simplify your mental model* of code, and because it might eventually be more scalable. For example, let's say you have a function that does something to each element in an array or list. Traditional code would look like this: foreach (object item in MyList) { DoSomething(item); } No big deal there. But what if you use the more-declarative syntax and instead define DoSomething() as an Action? Then you can say it this way: MyList.ForEach(DoSomething); This is, of course, more concise. But I'm sure you have more concerns than just saving two lines of code here and there. Performance, for example. The old way, processing had to be done in sequence. What if the .ForEach() method had a way for you to signal that it could handle the processing in parallel, automatically? Now all of a sudden you've made your code multi-threaded in a very safe way and only changed one line of code. And, in fact, there's an extension for .Net that lets you do just that. * *If you follow that link, it takes you to a blog post by a friend of mine. The whole post is a little long, but you can scroll down to the heading titled "The Problem" and pick it up there no problem. A: As far as I can tell, it started being used to describe programming systems like Prolog, because Prolog is (supposedly) about declaring things in an abstract way. It increasingly means very little, as it has the definition given by the users above. It should be clear that there is a gulf between the declarative programming of Haskell, as against the declarative programming of HTML.
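To make the what-versus-how distinction in the answers above concrete, here is a small illustrative sketch (the orders collection and table are made up for the example): the imperative version spells out the iteration and accumulation steps, while the declarative SQL version only states the desired result and leaves the evaluation strategy to the database engine.

// Imperative (C#): say how to compute the result, step by step
decimal total = 0;
foreach (var order in orders)
{
    if (order.Price > 100)
    {
        total += order.Price;
    }
}

-- Declarative (SQL): say what result is wanted
SELECT SUM(price) FROM orders WHERE price > 100;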
{ "language": "en", "url": "https://stackoverflow.com/questions/129628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "193" }
Q: How to create a C# Login handler How would I go about creating a web app login handler in C#? In Java I would use a JSP that posts the username and password to a servlet, which then delegates to a POJO - for the db lookup and validation. If validation fails the servlet forwards onto the login.jsp for another attempt, if successful then forwards to the secure resource. A: Look into Forms Authentication. A: Mainly, it's a terminology issue. Let me translate your Java to ASP.NET MVC: In ASP.NET MVC, I would use an HTML view that posts the username and password to a Controller action, which then delegates to a POCO - for the db lookup and validation. If validation fails the Controller renders the Login view for another attempt, if successful then forwards to the secure resource. And, to WebForms: In ASP.NET WebForms, I would use a LoginControl that posts back the username and password to the Login.aspx codebehind, which then delegates to a POCO - for the db lookup and validation. If validation fails the Login.aspx page would be shown again for another attempt, if successful then redirects to the secure resource. A: In the ASP.NET Web Forms model each page posts back to itself. Create a function that is tied to a button click to do the db lookup and validation. Here is an example in VB.net that can be easily converted to C# using a VB to C# converter: http://www.sitepoint.com/article/securing-passwords-database/ A: The .NET Web Site Administration Tool in Visual Studio is great for a first timer.
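To make the WebForms flavour above concrete, here is a minimal sketch of a login button handler. The UserValidator POCO and the control names are made up for the example; FormsAuthentication comes from System.Web.Security and assumes Forms Authentication (<authentication mode="Forms">) is configured in web.config.

protected void LoginButton_Click(object sender, EventArgs e)
{
    // Delegate the db lookup/validation to a plain class (the POCO).
    var validator = new UserValidator();
    if (validator.IsValid(UserNameTextBox.Text, PasswordTextBox.Text))
    {
        // Issues the forms-auth cookie and redirects to the originally requested page.
        FormsAuthentication.RedirectFromLoginPage(UserNameTextBox.Text, false);
    }
    else
    {
        ErrorLabel.Text = "Invalid user name or password.";
    }
}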
{ "language": "en", "url": "https://stackoverflow.com/questions/129629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to play a sound file With C#, How do I play (Pause, Forward...) a sound file (mp3, ogg)? The file could be on the hard disk, or on the internet. Is there any library or Class out there that can ease me the work ? A: If you don't mind including Microsoft.VisualBasic.dll in your project, you can do it this way: var audio = new Microsoft.VisualBasic.Devices.Audio(); audio.Play("some file path"); If you want to do more complex stuff, the easiest way I know of is to use the Windows Media Player API. You add the DLL and then work with it. The API is kind of clunky, but it does work; I've used it to make my own music player wrapper around Windows Media Player for personal use. Here are some helpful links to get you started: Building a Web Site with ASP .NET 2.0 to Navigate Your Music Library Windows Media Object Model Let the Music Play! EDIT: Since I wrote this, I've found an easier way, if you don't mind including WPF classes in your code. WPF (.NET 3.0 and forward) has a MediaPlayer class that's a wrapper around Windows Media Player. This means you don't have to write your own wrapper, which is nice since, as I mentioned above, the WMP API is rather clunky and hard to use. A: I would recommend the BASS Library. It can play both filebased music files and streaming content. There is also a .NET wrapper available. A: Alvas.Audio has RecordPlayer class with these possibilities: public static void TestRecordPlayer() { RecordPlayer rp = new RecordPlayer(); rp.PropertyChanged += new PropertyChangedEventHandler(rp_PropertyChanged); rp.Open(new Mp3Reader(File.OpenRead("in.mp3"))); rp.Play(); rp.Forward(1000); rp.Pause(); } static void rp_PropertyChanged(object sender, PropertyChangedEventArgs e) { switch (e.PropertyName) { case RecordPlayer.StateProperty: RecordPlayer rp = ((RecordPlayer)sender); if (rp.State == DeviceState.Stopped) { rp.Close(); } break; } } A: use PlaySound API call A: There's a media player control - basically what Media Player uses. You can put that in your program and there's an API you can use to control it. I think it's the best quick solution.
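As a rough sketch of the WPF MediaPlayer approach mentioned above (the file path is a placeholder; this needs references to WindowsBase and PresentationCore and a WPF application context, and formats like OGG still require an installed codec or a separate library such as BASS):

var player = new System.Windows.Media.MediaPlayer();
player.Open(new Uri(@"C:\music\track.mp3"));      // local file or an http:// URI
player.Play();
// ...
player.Pause();
player.Position += TimeSpan.FromSeconds(10);      // seek forward 10 seconds
player.Play();
player.Stop();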
{ "language": "en", "url": "https://stackoverflow.com/questions/129642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: C# + Castle ActiveRecord: HasAndBelongsToMany and collections Let's say I have many-to-many relationship (using the ActiveRecord attribute HasAndBelongsToMany) between Posts and Tags (domain object names changed to protect the innocent), and I wanted a method like FindAllPostByTags(IList<Tag> tags) that returns all Posts that have all (not just some of) the Tags in the parameter. Any way I could accomplish this either with NHibernate Expressions or HQL? I've searched through the HQL documentation and couldn't find anything that suited my needs. I hope I'm just missing something obvious! A: You could also just use an IN statement DetachedCriteria query = DetachedCriteria.For<Post>(); query.CreateCriteria("Post").Add(Expression.In("TagName", string.Join(",",tags.ToArray()) ); I haven't compiled that so it could have errors A: I don't have a system at hand with a Castle install right now, so I didn't test or compile this, but the code below should about do what you want. Junction c = Expression.Conjunction(); foreach(Tag t in tags) c = c.Add( Expression.Eq("Tag", t); return sess.CreateCriteria(typeof(Post)).Add(c).List(); A: I just had the same problem and tried to read the HQL-documentation, however some of the features doesn't seem to be implemented in NHibernate (with-keyword for example) I ended up with this sort of solution: select p FROM Post p JOIN p.Tags tag1 JOIN p.Tags tag2 WHERE tag1.Id = 1 tag2.Id = 2 Meaning, dynamically build the HQL using join for each tag, then make the selection in your WHERE clause. This worked for me. I tried doing the same thing with a DetachedCriteria but ran into trouble when trying to join the table multiple times.
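Another pattern that can express "has all of these tags" without building up a Conjunction is a grouped HQL query run against the underlying NHibernate ISession. This is an untested sketch (depending on the NHibernate version you may need to group by p.Id and re-fetch the posts), but it shows the idea: join the tags once, restrict to the wanted set, and require the match count to equal the number of tags.

string hql = @"select p from Post p
               join p.Tags t
               where t in (:tags)
               group by p
               having count(distinct t) = :tagCount";

var posts = session.CreateQuery(hql)
                   .SetParameterList("tags", tags)
                   .SetParameter("tagCount", (long)tags.Count)
                   .List<Post>();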
{ "language": "en", "url": "https://stackoverflow.com/questions/129650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I keep a DIV from expanding to take up all available width? In the following HTML, I'd like the frame around the image to be snug -- not to stretch out and take up all the available width in the parent container. I know there are a couple of ways to do this (including horrible things like manually setting its width to a particular number of pixels), but what is the right way? Edit: One answer suggests I turn off "display:block" -- but this causes the rendering to look malformed in every browser I've tested it in. Is there a way to get a nice-looking rendering with "display:block" off? Edit: If I add "float: left" to the pictureframe and "clear:both" to the P tag, it looks great. But I don't always want these frames floated to the left. Is there a more direct way to accomplish whatever "float" is doing? .pictureframe { display: block; margin: 5px; padding: 5px; border: solid brown 2px; background-color: #ffeecc; } #foo { border: solid blue 2px; float: left; } img { display: block; } <div id="foo"> <span class="pictureframe"> <img alt='' src="http://stackoverflow.com/favicon.ico" /> </span> <p> Why is the beige rectangle so wide? </p> </div> A: The beige rectangle is so wide because you have display: block on the span, turning an inline element into a block element. A block element is supposed to take up all available width, an inline element does not. Try removing the display: block from the css. A: Adding "float:left" to the span.pictureFrame selector fixes the problem as that's what "float:left" does :) Apart from everything else floating an element to the left will make it occupy only the space required by its contents. Any following block elements (the "p" for example) will float around the "floated" element. If you "clear" the float of the "p" it would follow the normal document flow thus going below span.pictureFrame. In fact you need "clear:left" as the element has been "float:left"-ed. For a more formal explanation you can check the CSS spec although it is beyond most people's comprehension. A: The right way is to use: .pictureframe { display: inline-block; } Edit: Floating the element also produces the same effect, this is because floating elements use the same shrink-to-fit algorithm for determining the width. A: Yes display:inline-block is your friend. Also have a look at: display:-moz-inline-block and display:-moz-inline-box. A: The only way I've been able to do picture frames reliably across browsers is to set the width dynamically. Here is an example using jQuery: $(window).load(function(){ $('img').wrap('<div class="pictureFrame"></div>'); $('div.pictureFrame').each(function(i) { $(this).width($('*:first', this).width()); }); }); This will work even if you don't know the image dimensions ahead of time, because it waits for the images to load (note we're using $(window).load rather than the more common $(document).ready) before adding the picture frame. It's a bit ugly, but it works. Here is the pictureFrame CSS for this example: .pictureFrame { background-color:#FFFFFF; border:1px solid #CCCCCC; line-height:0; padding:5px; } I'd love to see a reliable, cross-browser, CSS-only solution to this problem. This solution is something I came up with for a past project after much frustration trying to get it working with only CSS and HTML.
{ "language": "en", "url": "https://stackoverflow.com/questions/129651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How can I sanitize user input with PHP? Is there a catchall function somewhere that works well for sanitizing user input for SQL injection and XSS attacks, while still allowing certain types of HTML tags? A: No. You can't generically filter data without any context of what it's for. Sometimes you'd want to take a SQL query as input and sometimes you'd want to take HTML as input. You need to filter input on a whitelist -- ensure that the data matches some specification of what you expect. Then you need to escape it before you use it, depending on the context in which you are using it. The process of escaping data for SQL - to prevent SQL injection - is very different from the process of escaping data for (X)HTML, to prevent XSS. A: If you're using PostgreSQL, the input from PHP can be escaped with pg_escape_literal() $username = pg_escape_literal($_POST['username']); From the documentation: pg_escape_literal() escapes a literal for querying the PostgreSQL database. It returns an escaped literal in the PostgreSQL format. A: PHP has the new nice filter_input functions now, that for instance liberate you from finding 'the ultimate e-mail regex' now that there is a built-in FILTER_VALIDATE_EMAIL type My own filter class (uses JavaScript to highlight faulty fields) can be initiated by either an ajax request or normal form post. (see the example below) <? /** * Pork Formvalidator. validates fields by regexes and can sanitize them. Uses PHP filter_var built-in functions and extra regexes * @package pork */ /** * Pork.FormValidator * Validates arrays or properties by setting up simple arrays. * Note that some of the regexes are for dutch input! * Example: * * $validations = array('name' => 'anything','email' => 'email','alias' => 'anything','pwd'=>'anything','gsm' => 'phone','birthdate' => 'date'); * $required = array('name', 'email', 'alias', 'pwd'); * $sanitize = array('alias'); * * $validator = new FormValidator($validations, $required, $sanitize); * * if($validator->validate($_POST)) * { * $_POST = $validator->sanitize($_POST); * // now do your saving, $_POST has been sanitized. 
* die($validator->getScript()."<script type='text/javascript'>alert('saved changes');</script>"); * } * else * { * die($validator->getScript()); * } * * To validate just one element: * $validated = new FormValidator()->validate('blah@bla.', 'email'); * * To sanitize just one element: * $sanitized = new FormValidator()->sanitize('<b>blah</b>', 'string'); * * @package pork * @author SchizoDuckie * @copyright SchizoDuckie 2008 * @version 1.0 * @access public */ class FormValidator { public static $regexes = Array( 'date' => "^[0-9]{1,2}[-/][0-9]{1,2}[-/][0-9]{4}\$", 'amount' => "^[-]?[0-9]+\$", 'number' => "^[-]?[0-9,]+\$", 'alfanum' => "^[0-9a-zA-Z ,.-_\\s\?\!]+\$", 'not_empty' => "[a-z0-9A-Z]+", 'words' => "^[A-Za-z]+[A-Za-z \\s]*\$", 'phone' => "^[0-9]{10,11}\$", 'zipcode' => "^[1-9][0-9]{3}[a-zA-Z]{2}\$", 'plate' => "^([0-9a-zA-Z]{2}[-]){2}[0-9a-zA-Z]{2}\$", 'price' => "^[0-9.,]*(([.,][-])|([.,][0-9]{2}))?\$", '2digitopt' => "^\d+(\,\d{2})?\$", '2digitforce' => "^\d+\,\d\d\$", 'anything' => "^[\d\D]{1,}\$" ); private $validations, $sanatations, $mandatories, $errors, $corrects, $fields; public function __construct($validations=array(), $mandatories = array(), $sanatations = array()) { $this->validations = $validations; $this->sanitations = $sanitations; $this->mandatories = $mandatories; $this->errors = array(); $this->corrects = array(); } /** * Validates an array of items (if needed) and returns true or false * */ public function validate($items) { $this->fields = $items; $havefailures = false; foreach($items as $key=>$val) { if((strlen($val) == 0 || array_search($key, $this->validations) === false) && array_search($key, $this->mandatories) === false) { $this->corrects[] = $key; continue; } $result = self::validateItem($val, $this->validations[$key]); if($result === false) { $havefailures = true; $this->addError($key, $this->validations[$key]); } else { $this->corrects[] = $key; } } return(!$havefailures); } /** * * Adds unvalidated class to thos elements that are not validated. Removes them from classes that are. */ public function getScript() { if(!empty($this->errors)) { $errors = array(); foreach($this->errors as $key=>$val) { $errors[] = "'INPUT[name={$key}]'"; } $output = '$$('.implode(',', $errors).').addClass("unvalidated");'; $output .= "new FormValidator().showMessage();"; } if(!empty($this->corrects)) { $corrects = array(); foreach($this->corrects as $key) { $corrects[] = "'INPUT[name={$key}]'"; } $output .= '$$('.implode(',', $corrects).').removeClass("unvalidated");'; } $output = "<script type='text/javascript'>{$output} </script>"; return($output); } /** * * Sanitizes an array of items according to the $this->sanitations * sanitations will be standard of type string, but can also be specified. * For ease of use, this syntax is accepted: * $sanitations = array('fieldname', 'otherfieldname'=>'float'); */ public function sanitize($items) { foreach($items as $key=>$val) { if(array_search($key, $this->sanitations) === false && !array_key_exists($key, $this->sanitations)) continue; $items[$key] = self::sanitizeItem($val, $this->validations[$key]); } return($items); } /** * * Adds an error to the errors array. */ private function addError($field, $type='string') { $this->errors[$field] = $type; } /** * * Sanitize a single var according to $type. 
* Allows for static calling to allow simple sanitization */ public static function sanitizeItem($var, $type) { $flags = NULL; switch($type) { case 'url': $filter = FILTER_SANITIZE_URL; break; case 'int': $filter = FILTER_SANITIZE_NUMBER_INT; break; case 'float': $filter = FILTER_SANITIZE_NUMBER_FLOAT; $flags = FILTER_FLAG_ALLOW_FRACTION | FILTER_FLAG_ALLOW_THOUSAND; break; case 'email': $var = substr($var, 0, 254); $filter = FILTER_SANITIZE_EMAIL; break; case 'string': default: $filter = FILTER_SANITIZE_STRING; $flags = FILTER_FLAG_NO_ENCODE_QUOTES; break; } $output = filter_var($var, $filter, $flags); return($output); } /** * * Validates a single var according to $type. * Allows for static calling to allow simple validation. * */ public static function validateItem($var, $type) { if(array_key_exists($type, self::$regexes)) { $returnval = filter_var($var, FILTER_VALIDATE_REGEXP, array("options"=> array("regexp"=>'!'.self::$regexes[$type].'!i'))) !== false; return($returnval); } $filter = false; switch($type) { case 'email': $var = substr($var, 0, 254); $filter = FILTER_VALIDATE_EMAIL; break; case 'int': $filter = FILTER_VALIDATE_INT; break; case 'boolean': $filter = FILTER_VALIDATE_BOOLEAN; break; case 'ip': $filter = FILTER_VALIDATE_IP; break; case 'url': $filter = FILTER_VALIDATE_URL; break; } return ($filter === false) ? false : filter_var($var, $filter) !== false ? true : false; } } Of course, keep in mind that you need to do your sql query escaping too depending on what type of db your are using (mysql_real_escape_string() is useless for an sql server for instance). You probably want to handle this automatically at your appropriate application layer like an ORM. Also, as mentioned above: for outputting to html use the other php dedicated functions like htmlspecialchars ;) For really allowing HTML input with like stripped classes and/or tags depend on one of the dedicated xss validation packages. DO NOT WRITE YOUR OWN REGEXES TO PARSE HTML! A: No, there is not. First of all, SQL injection is an input filtering problem, and XSS is an output escaping one - so you wouldn't even execute these two operations at the same time in the code lifecycle. Basic rules of thumb * *For SQL query, bind parameters *Use strip_tags() to filter out unwanted HTML *Escape all other output with htmlspecialchars() and be mindful of the 2nd and 3rd parameters here. A: You never sanitize input. You always sanitize output. The transforms you apply to data to make it safe for inclusion in an SQL statement are completely different from those you apply for inclusion in HTML are completely different from those you apply for inclusion in Javascript are completely different from those you apply for inclusion in LDIF are completely different from those you apply to inclusion in CSS are completely different from those you apply to inclusion in an Email.... By all means validate input - decide whether you should accept it for further processing or tell the user it is unacceptable. But don't apply any change to representation of the data until it is about to leave PHP land. A long time ago someone tried to invent a one-size fits all mechanism for escaping data and we ended up with "magic_quotes" which didn't properly escape data for all output targets and resulted in different installation requiring different code to work. A: Easiest way to avoid mistakes in sanitizing input and escaping data is using PHP framework like Symfony, Nette etc. or part of that framework (templating engine, database layer, ORM). 
Templating engine like Twig or Latte has output escaping on by default - you don't have to solve manually if you have properly escaped your output depending on context (HTML or Javascript part of web page). Framework is automatically sanitizing input and you should't use $_POST, $_GET or $_SESSION variables directly, but through mechanism like routing, session handling etc. And for database (model) layer there are ORM frameworks like Doctrine or wrappers around PDO like Nette Database. You can read more about it here - What is a software framework? A: Just wanted to add that on the subject of output escaping, if you use php DOMDocument to make your html output it will automatically escape in the right context. An attribute (value="") and the inner text of a <span> are not equal. To be safe against XSS read this: OWASP XSS Prevention Cheat Sheet A: To address the XSS issue, take a look at HTML Purifier. It is fairly configurable and has a decent track record. As for the SQL injection attacks, the solution is to use prepared statements. The PDO library and mysqli extension support these. A: Do not try to prevent SQL injection by sanitizing input data. Instead, do not allow data to be used in creating your SQL code. Use Prepared Statements (i.e. using parameters in a template query) that uses bound variables. It is the only way to be guaranteed against SQL injection. Please see my website http://bobby-tables.com/ for more about preventing SQL injection. A: PHP 5.2 introduced the filter_var function. It supports a great deal of SANITIZE, VALIDATE filters. A: Methods for sanitizing user input with PHP: * *Use Modern Versions of MySQL and PHP. *Set charset explicitly: * *$mysqli->set_charset("utf8");manual *$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=UTF8', $user, $password);manual *$pdo->exec("set names utf8");manual *$pdo = new PDO( "mysql:host=$host;dbname=$db", $user, $pass, array( PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8" ) );manual *mysql_set_charset('utf8') [deprecated in PHP 5.5.0, removed in PHP 7.0.0]. *Use secure charsets: * *Select utf8, latin1, ascii.., dont use vulnerable charsets big5, cp932, gb2312, gbk, sjis. *Use spatialized function: * *MySQLi prepared statements: $stmt = $mysqli->prepare('SELECT * FROM test WHERE name = ? 
LIMIT 1'); $param = "' OR 1=1 /*";$stmt->bind_param('s', $param);$stmt->execute(); *PDO::quote() - places quotes around the input string (if required) and escapes special characters within the input string, using a quoting style appropriate to the underlying driver:$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=UTF8', $user, $password);explicit set the character set$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);disable emulating prepared statements to prevent fallback to emulating statements that MySQL can't prepare natively (to prevent injection)$var = $pdo->quote("' OR 1=1 /*");not only escapes the literal, but also quotes it (in single-quote ' characters) $stmt = $pdo->query("SELECT * FROM test WHERE name = $var LIMIT 1"); *PDO Prepared Statements: vs MySQLi prepared statements supports more database drivers and named parameters: $pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=UTF8', $user, $password);explicit set the character set$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);disable emulating prepared statements to prevent fallback to emulating statements that MySQL can't prepare natively (to prevent injection) $stmt = $pdo->prepare('SELECT * FROM test WHERE name = ? LIMIT 1'); $stmt->execute(["' OR 1=1 /*"]); *mysql_real_escape_string [deprecated in PHP 5.5.0, removed in PHP 7.0.0]. *mysqli_real_escape_string Escapes special characters in a string for use in an SQL statement, taking into account the current charset of the connection. But recommended to use Prepared Statements because they are not simply escaped strings, a statement comes up with a complete query execution plan, including which tables and indexes it would use, it is a optimized way. *Use single quotes (' ') around your variables inside your query. *Check the variable contains what you are expecting for: * *If you are expecting an integer, use: ctype_digit — Check for numeric character(s);$value = (int) $value;$value = intval($value);$var = filter_var('0755', FILTER_VALIDATE_INT, $options); *For Strings use: is_string() — Find whether the type of a variable is stringUse Filter Function filter_var() — filters a variable with a specified filter:$email = filter_var($email, FILTER_SANITIZE_EMAIL);$newstr = filter_var($str, FILTER_SANITIZE_STRING);more predefined filters *filter_input() — Gets a specific external variable by name and optionally filters it:$search_html = filter_input(INPUT_GET, 'search', FILTER_SANITIZE_SPECIAL_CHARS); *preg_match() — Perform a regular expression match; *Write Your own validation function. A: One trick that can help in the specific circumstance where you have a page like /mypage?id=53 and you use the id in a WHERE clause is to ensure that id definitely is an integer, like so: if (isset($_GET['id'])) { $id = $_GET['id']; settype($id, 'integer'); $result = mysql_query("SELECT * FROM mytable WHERE id = '$id'"); # now use the result } But of course that only cuts out one specific attack, so read all the other answers. (And yes I know that the code above isn't great, but it shows the specific defence.) A: There's no catchall function, because there are multiple concerns to be addressed. * *SQL Injection - Today, generally, every PHP project should be using prepared statements via PHP Data Objects (PDO) as a best practice, preventing an error from a stray quote as well as a full-featured solution against injection. It's also the most flexible & secure way to access your database. 
Check out (The only proper) PDO tutorial for pretty much everything you need to know about PDO. (Sincere thanks to top SO contributor, @YourCommonSense, for this great resource on the subject.) *XSS - Sanitize data on the way in... * *HTML Purifier has been around a long time and is still actively updated. You can use it to sanitize malicious input, while still allowing a generous & configurable whitelist of tags. Works great with many WYSIWYG editors, but it might be heavy for some use cases. *In other instances, where we don't want to accept HTML/Javascript at all, I've found this simple function useful (and has passed multiple audits against XSS): /* Prevent XSS input */ function sanitizeXSS () { $_GET = filter_input_array(INPUT_GET, FILTER_SANITIZE_STRING); $_POST = filter_input_array(INPUT_POST, FILTER_SANITIZE_STRING); $_REQUEST = (array)$_POST + (array)$_GET + (array)$_REQUEST; } *XSS - Sanitize data on the way out... unless you guarantee the data was properly sanitized before you add it to your database, you'll need to sanitize it before displaying it to your user, we can leverage these useful PHP functions: * *When you call echo or print to display user-supplied values, use htmlspecialchars unless the data was properly sanitized safe and is allowed to display HTML. *json_encode is a safe way to provide user-supplied values from PHP to Javascript *Do you call external shell commands using exec() or system() functions, or to the backtick operator? If so, in addition to SQL Injection & XSS you might have an additional concern to address, users running malicious commands on your server. You need to use escapeshellcmd if you'd like to escape the entire command OR escapeshellarg to escape individual arguments. A: It's a common misconception that user input can be filtered. PHP even has a (now deprecated) "feature", called magic-quotes, that builds on this idea. It's nonsense. Forget about filtering (or cleaning, or whatever people call it). What you should do, to avoid problems, is quite simple: whenever you embed a a piece of data within a foreign code, you must treat it according to the formatting rules of that code. But you must understand that such rules could be too complicated to try to follow them all manually. For example, in SQL, rules for strings, numbers and identifiers are all different. For your convenience, in most cases there is a dedicated tool for such an embedding. For example, when you need to use a PHP variable in the SQL query, you have to use a prepared statement, that will take care of all the proper formatting/treatment. Another example is HTML: If you embed strings within HTML markup, you must escape it with htmlspecialchars. This means that every single echo or print statement should use htmlspecialchars. A third example could be shell commands: If you are going to embed strings (such as arguments) to external commands, and call them with exec, then you must use escapeshellcmd and escapeshellarg. Also, a very compelling example is JSON. The rules are so numerous and complicated that you would never be able to follow them all manually. That's why you should never ever create a JSON string manually, but always use a dedicated function, json_encode() that will correctly format every bit of data. And so on and so forth ... The only case where you need to actively filter data, is if you're accepting preformatted input. For example, if you let your users post HTML markup, that you plan to display on the site. 
However, you would be wise to avoid this at all costs, since no matter how well you filter it, it will always be a potential security hole. A: What you are describing here is two separate issues: * *Sanitizing / filtering of user input data. *Escaping output. 1) User input should always be assumed to be bad. Using prepared statements, and/or filtering with mysql_real_escape_string, is definitely a must. PHP also has filter_input built in which is a good place to start. 2) This is a large topic, and it depends on the context of the data being output. For HTML there are solutions such as htmlpurifier out there. As a rule of thumb, always escape anything you output. Both issues are far too big to go into in a single post, but there are lots of posts which go into more detail: Methods PHP output Safer PHP output A: The PHP filter extension has many of the functions needed for checking the external user input & it is designed for making data sanitization easier and quicker. PHP filters can comfortably sanitize & validate the external input.
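Tying the two halves of this thread together, here is a small illustrative sketch (the table and column names are made up) that treats SQL and HTML as separate contexts: the query parameter is bound through PDO rather than concatenated, and the value is escaped with htmlspecialchars only at the moment it is echoed into markup.

$pdo = new PDO('mysql:host=localhost;dbname=testdb;charset=utf8', $user, $password);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);

// SQL context: never interpolate user input into the query string; bind it.
$stmt = $pdo->prepare('SELECT title, body FROM comments WHERE author = ?');
$stmt->execute(array($_POST['author']));

// HTML context: escape when (and only when) the value goes into markup.
foreach ($stmt as $row) {
    echo '<p>' . htmlspecialchars($row['body'], ENT_QUOTES, 'UTF-8') . '</p>';
}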
{ "language": "en", "url": "https://stackoverflow.com/questions/129677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1257" }
Q: Is duplicated code more tolerable in unit tests? I ruined several unit tests some time ago when I went through and refactored them to make them more DRY--the intent of each test was no longer clear. It seems there is a trade-off between tests' readability and maintainability. If I leave duplicated code in unit tests, they're more readable, but then if I change the SUT, I'll have to track down and change each copy of the duplicated code. Do you agree that this trade-off exists? If so, do you prefer your tests to be readable, or maintainable? A: I agree. The trade off exists but is different in different places. I'm more likely to refactor duplicated code for setting up state. But less likely to refactor the part of the test that actually exercises the code. That said, if exercising the code always takes several lines of code then I might think that is a smell and refactor the actual code under test. And that will improve readability and maintainability of both the code and the tests. A: Duplicated code is a smell in unit test code just as much as in other code. If you have duplicated code in tests, it makes it harder to refactor the implementation code because you have a disproportionate number of tests to update. Tests should help you refactor with confidence, rather than be a large burden that impedes your work on the code being tested. If the duplication is in fixture set up, consider making more use of the setUp method or providing more (or more flexible) Creation Methods. If the duplication is in the code manipulating the SUT, then ask yourself why multiple so-called “unit” tests are exercising the exact same functionality. If the duplication is in the assertions, then perhaps you need some Custom Assertions. For example, if multiple tests have a string of assertions like: assertEqual('Joe', person.getFirstName()) assertEqual('Bloggs', person.getLastName()) assertEqual(23, person.getAge()) Then perhaps you need a single assertPersonEqual method, so that you can write assertPersonEqual(Person('Joe', 'Bloggs', 23), person). (Or perhaps you simply need to overload the equality operator on Person.) As you mention, it is important for test code to be readable. In particular, it is important that the intent of a test is clear. I find that if many tests look mostly the same, (e.g. three-quarters of the lines the same or virtually the same) it is hard to spot and recognise the significant differences without carefully reading and comparing them. So I find that refactoring to remove duplication helps readability, because every line of every test method is directly relevant to the purpose of the test. That's much more helpful for the reader than a random combination of lines that are directly relevant, and lines that are just boilerplate. That said, sometimes tests are exercising complex situations that are similiar but still significantly different, and it is hard to find a good way to reduce the duplication. Use common sense: if you feel the tests are readable and make their intent clear, and you're comfortable with perhaps needing to update more than a theoretically minimal number of tests when refactoring the code invoked by the tests, then accept the imperfection and move on to something more productive. You can always come back and refactor the tests later, when inspiration strikes! A: Jay Fields coined the phrase that "DSLs should be DAMP, not DRY", where DAMP means descriptive and meaningful phrases. I think the same applies to tests, too. Obviously, too much duplication is bad. 
But removing duplication at all costs is even worse. Tests should act as intent-revealing specifications. If, for example, you specify the same feature from several different angles, then a certain amount of duplication is to be expected. A: Implementation code and tests are different animals and factoring rules apply differently to them. Duplicated code or structure is always a smell in implementation code. When you start having boilerplate in implementation, you need to revise your abstractions. On the other hand, testing code must maintain a level of duplication. Duplication in test code achieves two goals: * *Keeping tests decoupled. Excessive test coupling can make it hard to change a single failing test that needs updating because the contract has changed. *Keeping the tests meaningful in isolation. When a single test is failing, it must be reasonably straightforward to find out exactly what it is testing. I tend to ignore trivial duplication in test code as long as each test method stays shorter than about 20 lines. I like when the setup-run-verify rhythm is apparent in test methods. When duplication creeps up in the "verify" part of tests, it is often beneficial to define custom assertion methods. Of course, those methods must still test a clearly identified relation that can be made apparent in the method name: assertPegFitsInHole -> good, assertPegIsGood -> bad. When test methods grow long and repetitive I sometimes find it useful to define fill-in-the-blanks test templates that take a few parameters. Then the actual test methods are reduced to a call to the template method with the appropriate parameters. As for a lot of things in programming and testing, there is no clear-cut answer. You need to develop a taste, and the best way to do so is to make mistakes. A: "refactored them to make them more DRY--the intent of each test was no longer clear" It sounds like you had trouble doing the refactoring. I'm just guessing, but if it wound up less clear, doesn't that mean you still have more work to do so that you have reasonably elegant tests which are perfectly clear? That's why tests are a subclass of UnitTest -- so you can design good test suites that are correct, easy to validate and clear. In the olden times we had testing tools that used different programming languages. It was hard (or impossible) to design pleasant, easy-to-work with tests. You have the full power of -- whatever language you're using -- Python, Java, C# -- so use that language well. You can achieve good-looking test code that's clear and not too redundant. There's no trade-off. A: I LOVE rspec because of this: It has 2 things to help - * *shared example groups for testing common behaviour. you can define a set of tests, then 'include' that set in your real tests. *nested contexts. you can essentially have a 'setup' and 'teardown' method for a specific subset of your tests, not just every one in the class. The sooner that .NET/Java/other test frameworks adopt these methods, the better (or you could use IronRuby or JRuby to write your tests, which I personally think is the better option) A: I feel that test code requires a similar level of engineering that would normally be applied to production code. There can certainly be arguments made in favor of readability and I would agree that's important. In my experience, however, I find that well-factored tests are easier to read and understand. 
If there's 5 tests that each look the same except for one variable that's changed and the assertion at the end, it can be very difficult to find what that single differing item is. Similarly, if it is factored so that only the variable that's changing is visible and the assertion, then it's easy to figure out what the test is doing immediately. Finding the right level of abstraction when testing can be difficult and I feel it is worth doing. A: Readability is more important for tests. If a test fails, you want the problem to be obvious. The developer shouldn't have to wade through a lot of heavily factored test code to determine exactly what failed. You don't want your test code to become so complex that you need to write unit-test-tests. However, eliminating duplication is usually a good thing, as long as it doesn't obscure anything, and eliminating the duplication in your tests may lead to a better API. Just make sure you don't go past the point of diminishing returns. A: I don't think there is a relation between more duplicated and readable code. I think your test code should be as good as your other code. Non-repeating code is more readable then duplicated code when done well. A: Ideally, unit tests shouldn't change much once they are written so I would lean towards readability. Having unit tests be as discrete as possible also helps to keep the tests focused on the specific functionality that they are targeting. With that said, I do tend to try and reuse certain pieces of code that I wind up using over and over, such as setup code that is exactly the same across a set of tests. A: You can reduce repetition using several different flavours of test utility methods. I'm more tolerant of repetition in test code than in production code, but I have been frustrated by it sometimes. When you change a class's design and you have to go back and tweak 10 different test methods that all do the same setup steps, it's frustrating.
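As a small illustration of the custom-assertion idea above (a JUnit-style sketch; the Person and repository types are hypothetical), the helper removes the repeated assertion block without hiding what each test checks:

// JUnit 4: import static org.junit.Assert.assertEquals;
private static void assertPersonEquals(Person expected, Person actual) {
    assertEquals(expected.getFirstName(), actual.getFirstName());
    assertEquals(expected.getLastName(), actual.getLastName());
    assertEquals(expected.getAge(), actual.getAge());
}

@Test
public void savedPersonCanBeLoadedAgain() {
    repository.save(new Person("Joe", "Bloggs", 23));
    Person loaded = repository.findByLastName("Bloggs");
    assertPersonEquals(new Person("Joe", "Bloggs", 23), loaded);
}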
{ "language": "en", "url": "https://stackoverflow.com/questions/129693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "143" }
Q: Java: Serializing a huge amount of data to a single file I need to serialize a huge amount of data (around 2gigs) of small objects into a single file in order to be processed later by another Java process. Performance is kind of important. Can anyone suggest a good method to achieve this? A: Have you taken a look at google's protocol buffers? Sounds like a use case for it. A: I don't know why Java Serialization got voted down, it's a perfectly viable mechanism. It's not clear from the original post, but is all 2G of data in the heap at the same time? Or are you dumping something else? Out of the box, Serialization isn't the "perfect" solution, but if you implement Externalizable on your objects, Serialization can work just fine. Serializations big expense is figuring out what to write and how to write it. By implementing Externalizable, you take those decisions out of its hands, thus gaining quite a boost in performance, and a space savings. While I/O is a primary cost of writing large amounts of data, the incidental costs of converting the data can also be very expensive. For example, you don't want to convert all of your numbers to text and then back again, better to store them in a more native format if possible. ObjectStream has methods to read/write the native types in Java. If all of your data is designed to be loaded in to a single structure, you could simply do ObjectOutputStream.writeObject(yourBigDatastructure), after you've implemented Externalizable. However, you could also iterate over your structure and call writeObject on the individual objects. Either way, you're going to need some "objectToFile" routine, perhaps several. And that's effectively what Externalizable provides, as well as a framework to walk your structure. The other issue, of course, is versioning, etc. But since you implement all of the serialization routines yourself, you have full control over that as well. A: Have you tried java serialization? You would write them out using an ObjectOutputStream and read 'em back in using an ObjectInputStream. Of course the classes would have to be Serializable. It would be the low effort solution and, because the objects are stored in binary, it would be compact and fast. A: A simplest approach coming immediately to my mind is using memory-mapped buffer of NIO (java.nio.MappedByteBuffer). Use the single buffer (approximately) corresponding to the size of one object and flush/append them to the output file when necessary. Memory-mapped buffers are very effecient. A: protocol buffers : makes sense. here's an excerpt from their wiki : http://code.google.com/apis/protocolbuffers/docs/javatutorial.html Getting More Speed By default, the protocol buffer compiler tries to generate smaller files by using reflection to implement most functionality (e.g. parsing and serialization). However, the compiler can also generate code optimized explicitly for your message types, often providing an order of magnitude performance boost, but also doubling the size of the code. If profiling shows that your application is spending a lot of time in the protocol buffer library, you should try changing the optimization mode. Simply add the following line to your .proto file: option optimize_for = SPEED; Re-run the protocol compiler, and it will generate extremely fast parsing, serialization, and other code. A: I developped JOAFIP as database alternative. A: Apache Avro might be also usefull. It's designed to be language independent and has bindings for the popular languages. Check it out. 
A: You should probably consider a database solution--all databases do is optimize their information, and if you use Hibernate, you keep your object model as is and don't really even think about your DB (I believe that's why it's called hibernate, just store your data off, then bring it back) A: If performance is very important then you need to write it yourself. You should use a compact binary format, because with 2 GB the disk I/O operations are very important. If you use any human-readable format like XML or other text formats you increase the size of the data by a factor of 2 or more. Depending on the data, it can speed things up if you compress the data on the fly with a low compression rate. A total no-go is default Java serialization, because on reading Java checks for every object whether it is a reference to an existing object.
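One detail worth knowing when streaming millions of small objects, related to the reference checking mentioned in the last answer: ObjectOutputStream remembers every object it has written so that it can encode back-references, and that handle table itself can exhaust memory. Wrapping the stream in a buffer and calling reset() periodically is a common workaround. This is an illustrative sketch (MyRecord stands in for your Serializable object type, records for your collection); note that reset() means objects shared across a reset boundary are written again rather than referenced.

// import java.io.*;
ObjectOutputStream out = new ObjectOutputStream(
        new BufferedOutputStream(new FileOutputStream("data.bin"), 1 << 16));
try {
    int count = 0;
    for (MyRecord record : records) {    // records: your collection or iterator of small objects
        out.writeObject(record);
        if (++count % 10000 == 0) {
            out.reset();                  // drop the back-reference table so it cannot grow without bound
        }
    }
} finally {
    out.close();
}
// The reader then loops on readObject() until EOFException, or reads a count written up front.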
{ "language": "en", "url": "https://stackoverflow.com/questions/129695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Parse Fast Infoset documents in PHP? Is there a library which allows PHP to decode application/fastinfoset binary XML? A: As far as I know, there's no PHP-library that does this. Can you hack around this by creating a tiny java-program that decodes/transforms the FI and call this from PHP? I know this is a less-than-ideal solution, but this does seem to be uncharted territory. https://fi.dev.java.net/how-to-use.html has some java-examples on how to handle FI. As for bridging PHP and Java; http://sourceforge.net/projects/php-java-bridge is supposedly good (though, site is down when I try), http://www.php.net/manual/en/book.java.php also have som information on integrating Java and PHP. Alternatively, you can use probably use webservice or messaging to communicate between PHP and Java. (This is probably obvious.)
{ "language": "en", "url": "https://stackoverflow.com/questions/129731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Any good collection module in perl? Can someone suggest a good module in perl which can be used to store collection of objects? Or is ARRAY a good enough substitute for most of the needs? Update: I am looking for a collections class because I want to be able to do an operation like compute collection level property from each element. Since I need to perform many such operations, I might as well write a class which can be extended by individual objects. This class will obviously work with arrays (or may be hashes). A: There are collection modules for more complex structures, but it is common style in Perl to use Arrays for arrays, stacks and lists. Perl has built in functions for using the array as a stack or list : push/pop, shift/unshift, splice (inserting or removing in the middle) and the foreach form for iteration. Perl also has a map, called a hashmap which is the equivalent to a Dictionary in Python - allowing you to have an association between a single key and a single value. Perl developers often compose these two data-structures to build what they need - need multiple values? Store array-references in the value part of the hashtable (Map). Trees can be built in a similar manner - if you need unique keys, use multiple-levels of hashmaps, or if you don't use nested array references. These two primitive collection types in Perl don't have an Object Oriented api, but they still are collections. If you look on CPAN you'll likely find modules that provide other Object Oriented data structures, it really depends on your need. Is there a particular data structure you need besides a List, Stack or Map? You might get a more precise answer (eg a specific module) if you're asking about a particular data structure. Forgot to mention, if you're looking for small code examples across a variety of languages, PLEAC (Programming Language Examples Alike Cookbook) is a decent resource. A: I would second Michael Carman's comment: please do not use the term "Hashmap" or "map" when you mean a hash or associative array. Especially when Perl has a map function; that just confuses things. Having said that, Kyle Burton's response is fundamentally sound: either a hash or an array, or a complex structure composed of a mixture of the two, is usually enough. Perl groks OO, but doesn't enforce it; chances are that a loosely-defined data structure may be good enough for what you need. Failing that, please define more exactly what you mean by "compute collection level property from each element". And bear in mind that Perl has keywords like map and grep that let you do functional programming things like e.g. my $record = get_complex_structure(); # $record = { # 'widgets' => { # name => 'ACME Widgets', # skus => [ 'WIDG01', 'WIDG02', 'WIDG03' ], # sales => { # WIDG01 => { num => 25, value => 105.24 }, # WIDG02 => { num => 10, value => 80.02 }, # WIDG03 => { num => 8, value => 205.80 }, # }, # }, # ### and so on for 'grommets', 'nuts', 'bolts' etc. 
# } my @standouts = map { $_->[0] } sort { $b->[2] <=> $a->[2] || $b->[1] <=> $a->[1] || $record->{$a->[0]}->{name} cmp $record->{$b->[0]}->{name} } map { my ($num, $value); for my $sku (@{$record->{$_}{skus}}) { $num += $record->{$_}{sales}{$sku}{num}; $value += $record->{$_}{sales}{$sku}{value}; } [ $_, $num, $value ]; } keys %$record; Reading from back to front, this particular Schwarztian transform does three things: 3) It takes a key to $record, goes through the SKUs defined in this arbitrary structure, and works out the aggregate number and total value of transactions. It returns an anonymous array containing the key, the number of transactions and the total value. 2) The next block takes in a number of arrayrefs and sorts them a) first of all by comparing the total value, numerically, in descending orders; b) if the values are equal, by comparing the number of transactions, numerically in descending order; and c) if that fails, by sorting asciibetically on the name associated with this order. 1) Finally, we take the key to $record from the sorted data structure, and return that. It may well be that you don't need to set up a separate class to do what you want. A: I would normally use an @array or a %hash. What features are you looking for that aren't provided by those? A: Base your decision on how you need to access the objects. If pushing them onto an array, indexing into, popping/shifting them off works, then use an array. Otherwise hash them by some key or organize them into a tree of objects that meets your needs. A hash of objects is a very simple, powerful, and highly-optimized way of doing things in Perl. A: Since Perl arrays can easily be appended to, resized, sorted, etc., they are good enough for most "collection" needs. In cases where you need something more advanced, a hash will generally do. I wouldn't recommend that you go looking for a collection module until you actually need it. A: i would stick with an ARRAY or a HASH. @names = ('Paul','Michael','Jessica','Megan'); and my %petsounds = ("cat" => "meow", "dog" => "woof", "snake" => "hiss"); source A: Either an array or a hash can store a collection of objects. A class might be better if you want to work with the class in certain ways but you'd have to tell us what those ways are before we could make any good recommendations. A: It depends a lot; there's Sparse Matrix modules, some forms of persistence, a new style of OO etc Most people just man perldata, perllol, perldsc to answer their specific issue with a data structure.
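Since the original question was about computing collection-level properties from each element, a plain array of objects plus map, grep and List::Util usually covers it without a dedicated collection class; a small sketch (the Item class and its price method are made up for the example):

use List::Util qw(sum max);

my @items = ( Item->new(price => 10), Item->new(price => 25), Item->new(price => 7) );

my @prices   = map  { $_->price } @items;        # per-element property
my $total    = sum(@prices);                     # collection-level aggregate
my $dearest  = max(@prices);
my @over_ten = grep { $_->price > 10 } @items;   # filtered sub-collection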
{ "language": "en", "url": "https://stackoverflow.com/questions/129740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: NPE in JBossWS on JBoss 4.2.2 with jmxremote enabled I am trying to set up JBoss 4.2.2 and JConsole for remote monitoring. As per many of the how-to's I have found on the web to do this you need to enable jmxremote by setting the following options in run.conf. (I realize the other two opts disable authentication) JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=11099" JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false" JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=false" Which results in the following exception: 13:06:56,418 INFO [TomcatDeployer] performDeployInternal :: deploy, ctxPath=/services, warUrl=.../tmp/deploy/tmp34585xxxxxxxxx.ear-contents/mDate-Services-exp.war/ 13:06:57,706 WARN [AbstractServerConfig] getWebServicePort :: Unable to calculate 'WebServicePort', using default '8080' 13:06:57,711 WARN [AbstractServerConfig] getWebServicePort :: Unable to calculate 'WebServicePort', using default '8080' 13:06:58,070 WARN [AbstractServerConfig] getWebServicePort :: Unable to calculate 'WebServicePort', using default '8080' 13:06:58,071 WARN [AbstractServerConfig] getWebServicePort :: Unable to calculate 'WebServicePort', using default '8080' 13:06:58,138 ERROR [MainDeployer] start :: Could not start deployment: file:/opt/jboss-4.2.2.GA/server/default/tmp/deploy/tmp34585xxxxxxxxx.ear-contents/xxxxx-Services.war java.lang.NullPointerException at org.jboss.wsf.stack.jbws.WSDLFilePublisher.getPublishLocation(WSDLFilePublisher.java:303) at org.jboss.wsf.stack.jbws.WSDLFilePublisher.publishWsdlFiles(WSDLFilePublisher.java:103) at org.jboss.wsf.stack.jbws.PublishContractDeploymentAspect.create(PublishContractDeploymentAspect.java:52) at org.jboss.wsf.framework.deployment.DeploymentAspectManagerImpl.deploy(DeploymentAspectManagerImpl.java:115) at org.jboss.wsf.container.jboss42.ArchiveDeployerHook.deploy(ArchiveDeployerHook.java:97) ... My application uses JWS which according to this bug: https://jira.jboss.org/jira/browse/JBWS-1943 Suggests this workaround: JAVA_OPTS="$JAVA_OPTS -Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl" JAVA_OPTS="$JAVA_OPTS -Djboss.platform.mbeanserver" JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote" (https://developer.jboss.org/wiki/JBossWS-FAQ#jive_content_id_How_to_use_JDK_JMX_JConsole_with_JBossWS) I've tried that however that then throws the following exception while trying to deploy a sar file in my ear which only contains on class which implements Schedulable for a couple of scheduled jobs my application requires: Caused by: java.lang.NullPointerException at EDU.oswego.cs.dl.util.concurrent.ConcurrentReaderHashMap.hash(ConcurrentReaderHashMap.java:298) at EDU.oswego.cs.dl.util.concurrent.ConcurrentReaderHashMap.get(ConcurrentReaderHashMap.java:410) at org.jboss.mx.server.registry.BasicMBeanRegistry.getMBeanMap(BasicMBeanRegistry.java:959) at org.jboss.mx.server.registry.BasicMBeanRegistry.contains(BasicMBeanRegistry.java:577) Any suggestions on where to go from here? EDIT: I have also tried the following variation: JAVA_OPTS="$JAVA_OPTS -DmbipropertyFile=../server/default/conf/mbi.properties -DpropertyFile=../server/default/conf/mdate.properties -Dwicket.configuration=DEVELOPMENT" JAVA_OPTS="$JAVA_OPTS -Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl" JAVA_OPTS="$JAVA_OPTS -Djboss.platform.mbeanserver" JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote" I'm using JDK 1.6.0_01-b06 A: I have honestly never tried this remoting approach. 
But, if both your client machine and the server happen to be Linux boxes or similar *nixes with SSH, then you can ssh -XCA to the server and start JConsole on the server and have the GUI display on your client machine with X port forwarding. A JConsole running locally to the server JVM you want to monitor should not have any trouble connecting. I personally think that's a nifty trick, but I realize that it doesn't really solve the problem of getting JConsole to connect remotely through JWS. A: First thing I would do is to delete both the /tmp and /work directories under JBoss /default and redeploy the WAR. If that doesn't help, I would upgrade the JDK to use a more recent version of 1.6. 1.6.0_01 is pretty old. A: I'm not sure if there's a specific reason you're trying to use WS to access the mbean server, but with JConsole you can directly access a remote JVM. To do this use "service:jmx:rmi:///jndi/rmi://<remote-machine>:<port>/jmxrmi" (where <remote-machine> is whatever machine you're trying to connect to and <port> is 11099) as the remote process. I have used this to connect to any 1.6 JVM that exposes an mbean server (JBoss, ActiveMQ, etc). A: I don't know if this is related, but JBoss has a tendency to redirect to itself. If you connect to a host, say jboss.localdomain:3873, wanting to connect to an EJB, JBoss might look up its own hostname and redirect to the address it gets from there. If you have a public hostname, it might find that instead (say jboss.publicdomain.com), and tell the client to reconnect to jboss.publicdomain.com:1099. Depending on your DNS, this might or might not be a reachable address from your client. There are various variations of this problem, and as a bonus, sometimes the initial "connection check" works, so the client app deploys, but fails later on connect. A: Had a similar issue, but with JBoss Seam: take a look at JBSEAM-4029. As one of the workarounds it suggests overriding the class running into the NPE - in Seam's case the JBossClusterMonitor. I bet the JWS code is running into the exact same issue, i.e. ending up calling MBeanServerFactory.findMBeanServer(null) at some point in time. The stack trace should reveal which exact class does this.
{ "language": "en", "url": "https://stackoverflow.com/questions/129746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to begin WPF development? I've been using Winforms since .NET 1.1 and I want to start learning WPF. I'm looking for some good resources for a beginner in WPF. What should I read, what tools do I need, and what are the best practices I should follow? A: 1 Start understanding XAML and the control hierarchies - the UI markup and the new terms and features around it. KaXaml is a great tool to learn XAML; it is free to download: http://www.kaxaml.com/ 2 Since you already have long .NET experience, go directly to the SDK samples, start running them, see what is happening, and play with the XAML. http://msdn.microsoft.com/en-us/library/ms771449.aspx 3 If you are looking for blog resources, here are my best suggestions * *Josh Smith - http://joshsmithonwpf.wordpress.com/ *Dr. WPF - http://www.drwpf.com/blog/ But selecting a simple UI scenario which you already implemented or saw somewhere and trying to implement it in WPF is probably the best approach to learning a new technology. And please don't be afraid of MVVM; those things will come in handy later once you are familiar with the WPF platform and XAML. A: I'd recommend the book Windows Presentation Foundation Unleashed by Adam Nathan. Then I'd recommend you write an application. Like every other dev environment, there are no perfect guidelines. You have to find the ones that make the most sense for your circumstance. The only way to do that is to just start coding. As for tools, Visual Studio 2008 [Express] is your best bet. Or you might be able to limp along with XamlPad. A: Adam Nathan's WPF Unleashed book is very good. A: I would also highly recommend using Blend together with VS 2008. Blend is great for creating animations. The Blend 2.5 Preview can be freely downloaded. I like the Designer WPF Blog, which has some good tutorials on how to do WPF stuff in Blend. A: Although already listed above, I wanted to reiterate one point. Kaxaml is, bar none, the best loose XAML editor out there. It has a snippet library, IntelliSense, split view, a XAML scrubber (pretty print), and more. I only wish we could hook up some assemblies (that you could reference from the XAML) ... Robby Ingebretsen, you rock. A: I would humbly also suggest taking a look at my blog, 2,000 Things You Should Know About WPF, where I post a single piece of information on WPF each day. The blog starts with first principles and gradually works into more advanced topics, so it's a good place to start, as a beginner. A: Mastering WPF (and Silverlight, and basically any vector-based XAML .NET rich UI framework) requires more than understanding the new development concepts (and there are many). It's not enough to fully understand dependency properties, attached properties, templates, data binding, styles, MVVM, the layout mechanism, visual states and parts, effects, routed events... To really know your way around, you need to understand some basic concepts in graphics (such as vector graphics, raster graphics, rendering, layered graphics techniques, animation, pixel shaders, gradients, geometries, paths, brushes, transformation matrices, etc). In addition to that, you need to learn and understand M-V-VM, which is not just a new design pattern - it's a whole new programming paradigm. So there is a lot to learn... and the problem is that no matter which starting point you pick, you always feel that something is missing. I tried several books as a starting point and many of them got me quite confused. Then I found "Illustrated WPF" by Daniel M. Solis and this one did the trick for me.
He explains concepts from the world of graphics in a way that is clear to developers, and then teaches all the new concepts of XAML based UI while lightly touching each topic and diving into specific topics through a demo. Simply by following the tutorials, you find that you have learned a lot, and more importantly, removed the fear factor. Once you master that, you can move on to "WPF Unleashed" by Adam Nathan and dive deeper. This one gives you a much more In-Depth view of the concepts that are unique to WPF, which I believe you have a much better chance of understanding once you have seen each feature at least once. They somehow all complete eachother and only make sense together. You will still have tons to learn after that, but at this point you can develop rich applications and learn new topics as you go... Enjoy :-) A: Please have a look at this StackOverflow post, which has a list of book recommendations. In terms of best practices, get familiar with the M-V-VM pattern. It seems to have gained the most traction in WPF-land. Check out this post for what tools you can use for WPF development. The MSDN Forum is a great place for resources, as is the MSDN help files on WPF. My personal recommendation is for you to forget everything you have learnt about WinForms. WPF is a totally different model, and once I finally dropped my "I did it this way in WinForms, but that way doesn't work in WPF" I had one of those "lightbulb" moments. Hope this helps! A: Visual Studio 2008 (there's a free Express version). That's all the tools you need. Then try some How-to videos. Here's a good start: http://windowsclient.net/learn/videos_wpf.aspx A: Microsoft actually has a decent introduction on MSDN: http://msdn.microsoft.com/en-us/library/aa970268.aspx A: The learning curve is high, but there are a lot of really good resources out there. And, the MSDN documentation and SDK samples (as some have already mentioned) are really good. One thing that will help you though, is just to acknowledge the learning curve up front, and to not get discouraged when it doesn't make sense. There really are a lot of concepts to 'grok' before you can do some even basic things. The WPF books already mentioned are all valuable in their own way. My personal experience was that I got a copy of WPF Unleashed first and tried reading it to no avail. It wasn't until I picked up Charles Petzold's Application = Code + Markup and read through some of that ... before I could even begin to understand WPF Unleashed. However, my brain needs detail before concepts actually sink in ... Tim Sneath has an excellent list of WPF bloggers that I have found valuable to get hooked into the WPF community: WPF Bloggers A few blogs on my must read list: * *Rob Relyea *Dr. WPF *Josh Smith *Robby Ingebretsen *Kevin Moore *Charles Petzold *Pavan Podila Another thing I would do is get Dr. WPF's snippet library (located here). This is an extremely good way to learn some of the basic plumbing type concepts like Dependency Properties, Routed Events, and Routed Commands. Finally, I would get a copy of Blend (v2.5 is still in beta and free) and use that to generate xaml and then dive into that generated xaml to understand what you did in Blend, maps to the WPF API. Hope this helps. Good luck. 
A: Teach Yourself WPF in 24 Hours is another book worth a look. A: One resource I found that really helped me was from jfo's coding: http://blogs.msdn.com/jfoscoding/articles/765135.aspx The document is entitled "WPF for those who know WinForms", which is exactly the position I was in last year! A: * *Microsoft has a college concept and the documentation is .NET focused; *the focus must remain on the basic language [arrays, classes, ...], which means not going straight into the .NET structures (other built-in ways of doing things); *XAML can be set aside at first, and the interface built in code; *once your personal coding style is structured and learned, try researching XAML and the .NET structures if you are still interested; *computers are now a social and internet medium for new devices, and building cool desktop applications is led by trends; *making a blog would be the one thought left out of this focus, and there is also the Windows Store now;
{ "language": "en", "url": "https://stackoverflow.com/questions/129772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: NHibernate : map to fields or properties? When you create your mapping files, do you map your properties to fields or properties : <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Foo" namespace="Foo.Bar" > <class name="Foo" table="FOOS" batch-size="100"> [...] <property name="FooProperty1" access="field.camelcase" column="FOO_1" type="string" length="50" /> <property name="FooProperty2" column="FOO_2" type="string" length="50" /> [...] </class> </hibernate-mapping> Of course, please explain why :) Usually, I map to properties, but mapping to fields can enable to put some "logic" in the getters/setters of the properties. Is it "bad" to map to fields ? Is there a best practice ? A: I map to properties. If I find it necessary, I map the SETTER to a field. (usually via something like "access=field.camelcase"). This lets me have nice looking Queries, e.g. "from People Where FirstName = 'John'" instead of something like "from People Where firstName/_firstName" and also avoid setter logic when hydrating my entities. A: Properties are also useful in the case you need to do something funky with the data as it comes in and out of persistant storage. This should generally be avoided, but some rare business cases or legacy support sometimes calls for this. (Just remember that if you somehow transform the data when it comes back out with the getter, NHibernate will (by default) use the return from the getter and save it that way back to the database when the Session is flushed/closed. Make sure that is what you want.) A: I map to properties, I haven't come across the situation where I would map to a field... and when I have I augment my B.O. design for the need. I think it allows for better architecture. A: I map to properties because I use automatic properties. Except for collections (like sets. Those I map to fields (access="field.camelcase-underscore") because I don't have public properties exposing them, but methods. A: Null Objects Mapping to fields can be useful if you are implementing the null object pattern if your classes. As this cannot be performed (easily) when mapping to Properties. You end up having to store fake objects in the database. HQL I was unsure that with HQL queries you had to change the property names if you were using a field access approach. ie if you had <property name="FirstName" access="field.camelcase" /> I thought you could still write "From Person where FirstName = :name"; as it would use the property name still. Further discussion on field strategies and Null object can be found here. Performance In relation to performance of field vs property on John Chapman's blog It appears there isn't much of an issue in performance with small-midrange result sets. In summary, each approach has certain perks that may be useful depending on the scenario (field access allows readonly getters, no need for setters, property access works when nothing special is required from your poco and seems to be the defacto approach. etc) A: I tend to agree with the answers above. Generally, map to properties for almost everything, then map to fields for collection setters. The only other place you'd want to map to fields is when you have something: public class AuditableEntity { /*...*/ DateTime creationDate = DateTime.Now; /*...*/ public DateTime CreationDate { get { return creationDate; } } } A: I map directly to fields, which allows me to use the property setters to keep track of a property's dirty state.
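As a rough illustration of the field-backed collection approach mentioned in the answers above, a C# sketch might look like the following (the Customer/Order entity names and the bag mapping are hypothetical examples, not a recommendation for your model):
using System.Collections.Generic;

public class Order { /* hypothetical child entity */ }

public class Customer
{
    // NHibernate writes to this field directly via access="field.camelcase-underscore"
    private IList<Order> _orders = new List<Order>();

    // callers only get a read-only view; changes go through AddOrder
    public virtual IEnumerable<Order> Orders
    {
        get { return _orders; }
    }

    public virtual void AddOrder(Order order)
    {
        _orders.Add(order);
    }
}
The matching mapping fragment would be along the lines of <bag name="Orders" access="field.camelcase-underscore" cascade="all"> <key column="CUSTOMER_ID" /> <one-to-many class="Order" /> </bag>, which keeps HQL readable ("from Customer ...") while still protecting the collection behind the property.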
{ "language": "en", "url": "https://stackoverflow.com/questions/129773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I schedule a job in Sql Agent (Sql Server 2005) via C# code? Hi I'd like to schedule an existing job in the Sql Server 2005 agent via C# code... i.e. when someone clicks a button on an asp.net web page. How can I do this? Thanks! A: Have a look here: SMO Job Class The SQL Server Management Objects (SMO) Class Library lets you do practically anything programmatically in SQL Server. A: Check out: http://msdn.microsoft.com/en-us/library/ms186273(SQL.90).aspx Covers both SMO and T-SQL methods.
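For what it's worth, here is a rough C# sketch of the SMO approach for kicking off an existing Agent job by name; it assumes your project references the SMO assemblies (Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo), and the server/job names are placeholders:
using System;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Smo.Agent;

public static class AgentJobRunner
{
    public static void StartJob(string serverName, string jobName)
    {
        // connect and look the job up in the SQL Agent's job collection
        Server server = new Server(serverName);
        Job job = server.JobServer.Jobs[jobName];

        if (job == null)
            throw new ArgumentException("No job named " + jobName + " on " + serverName);

        // starts the job asynchronously, much like sp_start_job does
        job.Start();
    }
}
You would call something like AgentJobRunner.StartJob("MYSERVER", "NightlyImport") from the button's click handler; note that the identity the ASP.NET page runs under needs permission to start Agent jobs on that server.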
{ "language": "en", "url": "https://stackoverflow.com/questions/129808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Code or formula for intersection of two parabolas in any rotation I am working on a geometry problem that requires finding the intersection of two parabolic arcs in any rotation. I was able to intersect a line and a parabolic arc by rotating the plane to align the arc with an axis, but two parabolas cannot both align with an axis. I am working on deriving the formulas, but I would like to know if there is a resource already available for this. A: I'd first define the equation for the parabolic arc in 2D without rotations: x(t) = at² + bt + c y(t) = t; You can now apply the rotation by building a rotation matrix: s = sin(angle) c = cos(angle) matrix = | c -s | | s c | Apply that matrix and you'll get the rotated parametric equation: x' (t) = x(t) * c - s*t; y' (t) = x(t) * s + c*t; This will give you two equations (for x and y) of your parabolic arcs. Do that for both of your rotated arcs and subtract them. This gives you an equation like this: xa'(t) = rotated equation of arc1 in x ya'(t) = rotated equation of arc1 in y. xb'(t) = rotated equation of arc2 in x yb'(t) = rotated equation of arc2 in y. t1 = parametric value of arc1 t2 = parametric value of arc2 0 = xa'(t1) - xb'(t2) 0 = ya'(t1) - yb'(t2) Each of these equations is just an order-2 polynomial. These are easy to solve. To find the intersection points you solve the above equation (e.g. find the roots). You'll get up to two roots for each axis. Any root that is equal on x and y is an intersection point between the curves. Getting the position is easy now: Just plug the root into your parametric equation and you can directly get x and y. A: Unfortunately, the general answer requires solution of a fourth-order polynomial. If we transform coordinates so one of the two parabolas is in the standard form y=x^2, then the second parabola satisfies (ax+by)^2+cx+dy+e==0. To find the intersection, solve both simultaneously. Substituting in y=x^2 we see that the result is a fourth-order polynomial: (ax+bx^2)^2+cx+dx^2+e==0. Nils' solution therefore won't work (his mistake: each one is a 2nd-order polynomial in each variable separately, but together they're not). A: It's easy if you have a CAS at hand. See the solution in Mathematica. Choose one parabola and change coordinates so its equation becomes y(x)=a x^2 (Normal form). The other parabola will have the general form: A x^2 + B x y + CC y^2 + DD x + EE y + F == 0 where B^2-4 A C ==0 (so it's a parabola) Let's solve a numeric case: p = {a -> 1, A -> 1, B -> 2, CC -> 1, DD -> 1, EE -> -1, F -> 1}; p1 = {ToRules@N@Reduce[ (A x^2 + B x y + CC y^2 + DD x + EE y + F /. {y -> a x^2 } /. p) == 0, x]} {{x -> -2.11769}, {x -> -0.641445}, {x -> 0.379567 - 0.76948 I}, {x -> 0.379567 + 0.76948 I}} Let's plot it: Show[{ Plot[a x^2 /. p, {x, -10, 10}, PlotRange -> {{-10, 10}, {-5, 5}}], ContourPlot[(A x^2 + B x y + CC y^2 + DD x + EE y + F /. p) == 0, {x, -10, 10}, {y, -10, 10}], Graphics[{ PointSize[Large], Pink, Point[{x, x^2} /. p /. p1[[1]]], PointSize[Large], Pink, Point[{x, x^2} /. p /. p1[[2]]] }]}] The general solution involves calculating the roots of: 4 A F + 4 A DD x + (4 A^2 + 4 a A EE) x^2 + 4 a A B x^3 + a^2 B^2 x^4 == 0 Which is done easily in any CAS.
{ "language": "en", "url": "https://stackoverflow.com/questions/129815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: telnetting backspace key is wrong When I telnet into our work's Sun station my backspace key doesn't work. I use a temporary workaround of: $ stty erase ^H This works, but each time I telnet in I have to retype this. How can I set this to work properly in my .cshrc file? A: You can put stty erase ^H in your .cshrc file. This will fix the problem. A: When logging into a Solaris system, ^H would be the default erase key. I assume your friendly administrator changed it to ^? somewhere in the profile files for your shell (have a look with stty -a). A possible reason would be to make Solaris behave more like other systems at this site. Therefore you may want to consider changing the behaviour of your telnet client (send ^? instead of ^H). On a side note - telnet sends all information in the clear, including your username and password. SSH encrypts all communications, does everything telnet does and more. It is commonplace now, even on fairly recent versions of Solaris. A: Actually, I've run into multiple levels of this before. X windows sometimes maps DEL to Backspace and vice versa. Sometimes logging into one machine through another machine also does this. Here's a comprehensive look at how to solve this: http://www.ibb.net/~anne/keyboard.html
{ "language": "en", "url": "https://stackoverflow.com/questions/129826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Slipping podcasts through a filter My workplace filters our internet traffic by forcing us to go through a proxy, and unfortunately sites such as IT Conversations and Libsyn are blocked. However, mp3 files in general are not filtered, if they come from sites not on the proxy's blacklist. So is there a website somewhere that will let me give it a URL and then download the MP3 at that URL and send it my way, thus slipping through the proxy? Alternatively, is there some other easy way for me to get the mp3 files for these podcasts from work? EDIT and UPDATE: Since I've gotten downvoted a few times, perhaps I should explain/justify my situation. I'm a contractor working at a government facility, and we use some commercial filtering software which is very aggressive and overzealous. My boss is fine with me listening to podcasts at work and is fine with me circumventing the proxy filtering, and doesn't want to deal with the significant red tape (it's the government after all) associated with getting the IT department to make an exception for IT Conversations or the Java Posse, etc. So I feel that this is an important and relevant question for programmers. Unfortunately, all of the proxy websites for bypassing web filters have also been blocked, so I may have to download the podcasts I like at home in advance and then bring them into work. If can tell me about a lesser-known service I can try which might not be blocked, I'd appreciate it. A: Can you SSH out? SSH Tunnels are your friend! A: Why not subscribe at home and have your favorite podcasts copied to your mp3 player or a USB drive and just take it to work with you each day and back home in the evening? Then you can listen and your are not circumventing your clients network. A: There are many other Development/Dotnet/Technology podcasts, try one of those. for the blocked sites try an anonymous proxy site, there are plenty out there. A: Since this is work related material, I would recommend opening up a request that the sites in question not be blocked. A: I ended up writing an extremely dumb-and-simple cgi-script and hosting it on my web server, with a script on my work computer to get at it. Here's the CGI script: #!/usr/local/bin/python import cgitb; cgitb.enable() import cgi from urllib2 import urlopen def tohex(data): return "".join(hex(ord(char))[2:].rjust(2,"0") for char in data) def fromhex(encoded): data = "" while encoded: data += chr(int(encoded[:2], 16)) encoded = encoded[2:] return data if __name__=="__main__": print("Content-type: text/plain") print("") url = fromhex( cgi.FieldStorage()["target"].value ) contents = urlopen(url).read() for i in range(len(contents)/40+1): print( tohex(contents[40*i:40*i+40]) ) and here's the client script used to download the podcasts: #!/usr/bin/env python2.6 import os from sys import argv from urllib2 import build_opener, ProxyHandler if os.fork(): exit() def tohex(data): return "".join(hex(ord(char))[2:].rjust(2,"0") for char in data) def fromhex(encoded): data = "" while encoded: data += chr(int(encoded[:2], 16)) encoded = encoded[2:] return data if __name__=="__main__": if len(argv) < 2: print("usage: %s URL [FILENAME]" % argv[0]) quit() os.chdir("/home/courtwright/mp3s") url = "http://example.com/cgi-bin/hex.py?target=%s" % tohex(argv[1]) fname = argv[2] if len(argv)>2 else argv[1].split("/")[-1] with open(fname, "wb") as dest: for line in build_opener(ProxyHandler({"http":"proxy.example.com:8080"})).open(url): dest.write( fromhex(line.strip()) ) dest.flush()
{ "language": "en", "url": "https://stackoverflow.com/questions/129828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Forms in SharePoint If I want to put a form up on SharePoint, is it easier to use InfoPath or build a custom web part in C#? Are there other options that I should consider? What are the requirements and hurdles for each option? A: Building forms using InfoPath is absolutely the easiest way to publish a form in SharePoint. Note that it has many limitations, though, and you might find yourself trying to implement some logic it cannot handle or needing an extra feature. Programming in C# requires C# knowledge (of course) and knowledge of SharePoint's APIs. Also, once completed, the resulting DLL has to be published and trusted by SharePoint, which requires the sysadmin's intervention. This might not always be available to you, and might be problematic the next time you try to upgrade SharePoint. Finally, I recommend trying to accomplish most of the stuff (including forms) simply by using SharePoint's built-in features. If you dive into it a little, you'll find out you can actually build complicated applications simply by customizing lists' views, arranging the order of fields, adding columns (and site columns) of your own, etc. The best thing about this approach is it's pure SharePoint. No extra knowledge (and people) needed. A: Actually, you don't need much knowledge of the SharePoint APIs in order to create a custom web form. It's a very straightforward process; I don't have any links handy, but there should be more than a few "hello world" examples floating around to get you started. The trickier parts with SharePoint web parts are how best to debug and deploy them. I know some consultants who have a full suite of virtual servers running locally on their laptops so that everything is there for them to play with. That's not an option for me; my group utilizes System.Web.UI.WebControls.WebParts.WebPart, so we can test locally before deploying to our dev environment. Please note that if you go that route, you can't fully test locally, as you'll be missing some SharePoint elements like style sheets and system-provided web parts. As far as deployment, we're still working on the details. You can do it manually, but it's not great for locked-down production environments. One approach to review is "Features"; it looks promising for applying new enhancements as a single installer, although I'm not sure how you handle bug fixes for it. A: Can you be more specific about the form and SharePoint version? It depends on your version of SharePoint: on 2003, if you want to use InfoPath, it has to be installed on the clients as well. On SharePoint 2007, I think it is not required. If it is a large form with few business rules, InfoPath can be the way to go -> WYSIWYG and easy deployment. If your form involves more business rules, a web part or a page in _layouts can be "simpler" and more maintainable. A: InfoPath is wicked at quickly building forms and throwing them up onto SharePoint. I would expect it's 1000x faster than building a custom C# app. A: It may depend on what you are aiming for. For example, you can go for InfoPath if you are trying to give control to the end user so they can customize the form on their own. InfoPath is very easy to understand and develop with. In order to use InfoPath you have to learn InfoPath, InfoPath Forms Services, and the SharePoint API to integrate .NET (C#) and SharePoint.
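To make the "custom web part" option a little more concrete, here is a bare-bones C# sketch built on the ASP.NET WebPart class mentioned above (all class, control, and field names are made up; where you persist the submitted value is up to you):
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class SimpleFormPart : WebPart
{
    private TextBox _nameBox;
    private Button _saveButton;

    protected override void CreateChildControls()
    {
        _nameBox = new TextBox();

        _saveButton = new Button();
        _saveButton.Text = "Save";
        _saveButton.Click += SaveButton_Click;

        Controls.Add(new LiteralControl("Name: "));
        Controls.Add(_nameBox);
        Controls.Add(_saveButton);
    }

    private void SaveButton_Click(object sender, EventArgs e)
    {
        // write _nameBox.Text to a SharePoint list, a database, etc.
    }
}
The assembly then has to be deployed (bin or GAC) and registered as a SafeControl in web.config before SharePoint will render it - which is exactly the deployment friction the answers above describe.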
{ "language": "en", "url": "https://stackoverflow.com/questions/129829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to query the name of the current SQL Server database instance? It is a bit of a "chicken or egg" kind of query, but can someone dream up a query that can return the name of the current database instance in which the query executes? Believe me when I say I understand the paradox: why do you need to know the name of the database instance if you're already connected to execute the query? Auditing in a multi-database environment. I've looked at all the @@ globals in Books Online. "SELECT @@servername" comes close, but I want the name of the database instance rather than the server. A: You can use DB_NAME() : SELECT DB_NAME() A: I'm not sure exactly what you were asking. As you are writing this procedure for an auditing need, I guess you're asking how to get the current database name when the stored procedure exists in another database. e.g. USE DATABASE1 GO CREATE PROC spGetContext AS SELECT DB_NAME() GO USE DATABASE2 GO EXEC DATABASE1..spGetContext /* RETURNS 'DATABASE1' not 'DATABASE2' */ This is the correct behaviour, but not always what you're looking for. To get round this you need to create the SP in the master database and mark the procedure as a system procedure. The method of doing this differs between SQL Server versions, but here's the method for SQL Server 2005 (it is possible to do in 2000 with the master.dbo.sp_MS_upd_sysobj_category function). USE MASTER /* You must begin the procedure name with sp_ */ CREATE PROC sp_GetContext AS SELECT DB_NAME() GO EXEC sys.sp_MS_marksystemobject sp_GetContext USE DATABASE2 /* Note - no need to reference master when calling the SP */ EXEC sp_GetContext /* RETURNS 'DATABASE2' */ Hope this is what you were looking for. A: SELECT DB_NAME() Returns the database name. A: SELECT @@servername AS 'Server Name' -- The database server's machine name ,@@servicename AS 'Instance Name' -- e.g.: MSSQLSERVER ,DB_NAME() AS 'Database Name' ,HOST_NAME() AS 'Host Name' -- The database client's machine name A: SELECT DB_NAME() AS DatabaseName A: simply use: select @@servicename A: You should be able to use: SELECT SERVERPROPERTY ('InstanceName') A: You can get the instance name of your current database as shown below: SELECT @@SERVICENAME -- SQLEXPRESS SELECT SERVERPROPERTY ('InstanceName') -- SQLEXPRESS
{ "language": "en", "url": "https://stackoverflow.com/questions/129861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How do I write a generic memoize function? I'm writing a function to find triangle numbers and the natural way to write it is recursively: function triangle (x) if x == 0 then return 0 end return x+triangle(x-1) end But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it. A: Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax: triangle[0] = 0; triangle[x_] := triangle[x] = x + triangle[x-1] That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition. Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup: fib[0] = 1; fib[1] = 1; fib[n_] := fib[n] = fib[n-1] + fib[n-2] Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation! A: You're also asking the wrong question for your original problem ;) This is a better way for that case: triangle(n) = n * (n - 1) / 2 Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion. A: I bet something like this should work with variable argument lists in Lua: local function varg_tostring(...) local s = select(1, ...) for n = 2, select('#', ...) do s = s..","..select(n,...) end return s end local function memoize(f) local cache = {} return function (...) local al = varg_tostring(...) if cache[al] then return cache[al] else local y = f(...) cache[al] = y return y end end end You could probably also do something clever with a metatables with __tostring so that the argument list could just be converted with a tostring(). Oh the possibilities. A: In C# 3.0 - for recursive functions, you can do something like: public static class Helpers { public static Func<A, R> Memoize<A, R>(this Func<A, Func<A,R>, R> f) { var map = new Dictionary<A, R>(); Func<A, R> self = null; self = (a) => { R value; if (map.TryGetValue(a, out value)) return value; value = f(a, self); map.Add(a, value); return value; }; return self; } } Then you can create a memoized Fibonacci function like this: var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n); Console.WriteLine(memoized_fib(40)); A: In Scala (untested): def memoize[A, B](f: (A)=>B) = { var cache = Map[A, B]() { x: A => if (cache contains x) cache(x) else { val back = f(x) cache += (x -> back) back } } } Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following: val correctMem = memoize(memoize _) I don't think that this will compile, but it does illustrate the idea. 
A: Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind. I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow. Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms. If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or find a non-iterative algorithm such as Luke provided for this problem. A: function memoize (f) local cache = {} return function (x) if cache[x] then return cache[x] else local y = f(x) cache[x] = y return y end end end triangle = memoize(triangle); Note that to avoid a stack overflow, triangle would still need to be seeded. A: Here's something that works without converting the arguments to strings. The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK. local function m(f) local t = { } local function mf(x, ...) -- memoized f assert(x ~= nil, 'nil passed to memoized function') if select('#', ...) > 0 then t[x] = t[x] or m(function(...) return f(x, ...) end) return t[x](...) else t[x] = t[x] or f(x) assert(t[x] ~= nil, 'memoized function returns nil') return t[x] end end return mf end A: I've been inspired by this question to implement (yet another) flexible memoize function in Lua. https://github.com/kikito/memoize.lua Main advantages: * *Accepts a variable number of arguments *Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it. *Works just fine with functions that return multiple values. Pasting the code here as reference: local globalCache = {} local function getFromCache(cache, args) local node = cache for i=1, #args do if not node.children then return {} end node = node.children[args[i]] if not node then return {} end end return node.results end local function insertInCache(cache, args, results) local arg local node = cache for i=1, #args do arg = args[i] node.children = node.children or {} node.children[arg] = node.children[arg] or {} node = node.children[arg] end node.results = results end -- public function local function memoize(f) globalCache[f] = { results = {} } return function (...) local results = getFromCache( globalCache[f], {...} ) if #results == 0 then results = { f(...) 
} insertInCache(globalCache[f], {...}, results) end return unpack(results) end end return memoize A: Here is a generic C# 3.0 implementation, if it could help : public static class Memoization { public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function) { var cache = new Dictionary<T, TResult>(); var nullCache = default(TResult); var isNullCacheSet = false; return parameter => { TResult value; if (parameter == null && isNullCacheSet) { return nullCache; } if (parameter == null) { nullCache = function(parameter); isNullCacheSet = true; return nullCache; } if (cache.TryGetValue(parameter, out value)) { return value; } value = function(parameter); cache.Add(parameter, value); return value; }; } } (Quoted from a french blog article) A: In the vein of posting memoization in different languages, i'd like to respond to @onebyone.livejournal.com with a non-language-changing C++ example. First, a memoizer for single arg functions: template <class Result, class Arg, class ResultStore = std::map<Arg, Result> > class memoizer1{ public: template <class F> const Result& operator()(F f, const Arg& a){ typename ResultStore::const_iterator it = memo_.find(a); if(it == memo_.end()) { it = memo_.insert(make_pair(a, f(a))).first; } return it->second; } private: ResultStore memo_; }; Just create an instance of the memoizer, feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function). Next, a driver functon, and an implementation. only the driver function need be public int fib(int); // driver int fib_(int); // implementation Implemented: int fib_(int n){ ++total_ops; if(n == 0 || n == 1) return 1; else return fib(n-1) + fib(n-2); } And the driver, to memoize int fib(int n) { static memoizer1<int,int> memo; return memo(fib_, n); } Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...) This only memoizes one input functions. Generalizing for multiple args or varying arguments left as an exercise for the reader. A: In Perl generic memoization is easy to get. The Memoize module is part of the perl core and is highly reliable, flexible, and easy-to-use. The example from it's manpage: # This is the documentation for Memoize 1.01 use Memoize; memoize('slow_function'); slow_function(arguments); # Is faster than it was before You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation. Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program! Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html A: Extending the idea, it's also possible to memoize functions with two input parameters: function memoize2 (f) local cache = {} return function (x, y) if cache[x..','..y] then return cache[x..','..y] else local z = f(x,y) cache[x..','..y] = z return z end end end Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the functions to be memoized the odds of getting a cache hit would be increased by sorting the parameters before checking the cache. But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up. 
function gcd (a, b) if b == 0 then return a end return gcd(b, a%b) end As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be. A: Recursion isn't necessary. Given the definition in the question, the nth triangle number is n(n+1)/2, so... public int triangle(final int n){ return n * (n + 1) / 2; } A: Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go. int[] memo = new int[n+1]; int sum = 0; for(int i = 0; i <= n; ++i) { sum+=i; memo[i] = sum; } return memo[n];
{ "language": "en", "url": "https://stackoverflow.com/questions/129877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Easy acceptance testing with specification I look for a tool/framework to make automatic acceptance-testing. The interface to create new tests should be so easy, that a non-programmer (customer, boss) will be able to add specifications for which will be tested automatically. It should be some way to execute the tests from command-line, to include a run of the tests in automatic builds. I prefer Java and Open-Source, but my question isn't restricted in that way. What do you can recommend and please explain why your tool/framework is the best in the world. A: http://fitnesse.org/ appears to meet all of the qualifications you want. It is one I have used with success. A: I think that several of the options are very good and you should test them to see which fits your team : * *Cucumber (Ruby) *Fitnesse *Robot framework (Python/Java) *Behave for Java *SpecFlow (.net) A: I've found a framework named Concordion that may fulfill my needs. A: Another framework you may want to look at is Robot Framework. To see how test cases look like, take a look at the Quick Start Guide. A: What you ask for appears to be for a very well-defined system with a very specific sets of inputs and a high degree of automation built-into the system or developed for your system. Commercial applications such as HP Quick Test Pro isn't non-technical enough and requires an additional framework such as one from Sonnet, which is a step in the right direction, but neither is open source or java-based. Even with a framework in place, it's quite a bit of work to make this work in an automated way. I'd like you to consider the time needed to develop the framework vs the time to manually run these tests and verify that you are using your time well. A: How about Cucumber: Feature: Acceptance testing framework Scenario: an example speaks volumes Given a text example When it is read Then the simplicity will be appreciated You would need a developer to discuss with the boss what each of those lines really means and implement the step definition to drive it: Given /^a text example$/ do file.open("example.txt", "w") { |file| file.write "text example" } end When /^it is read$/ do SystemUnderTest.read("example.txt") end Then /^the simplicity will be appreciated$/ do SystemUnderTest.simplicity.should be_appreciated end
{ "language": "en", "url": "https://stackoverflow.com/questions/129884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Pass NSMutableArray object I'm getting lost in pointer land, I believe. I've got this (code syntax might be a little off, I am not looking at the machine with this code on it...but all the pertinent details are correct): NSMutableArray *tmp = [[NSMutableArray alloc] init]; I them pass that to a routine in another class - (BOOL)myRoutine: (NSMutableArray *)inArray { // Adds items to the array -- if I break at the end of this function, the inArray variable has a count of 10 } But when the code comes back into the calling routine, [tmp count] is 0. I must be missing something very simple and yet very fundamental, but for the life of me I can't see it. Can anyone point out what I'm doing wrong? EDIT: www.stray-bits.com asked if I have retained a reference to it, and I said "maybe...we tried this: NSMutableArray *tmp = [[[NSMutableArray alloc] init] retain]; not sure if that is what you mean, or if I did it right. EDIT2: Mike McMaster and Andy -- you guys are probably right, then. I don't have the code here (it's on a colleague's machine and they have left for the day), but to fill the array with values we were doing something along the lines of using a decoder(?) object. The purpose of this function is to open a file from the iPhone, read that file into an array (it's an array of objects that we saved in a previous run of the program). That "decoder" thing has a method that puts data into the array. Man, I've totally butchered this. I really hope you all can follow, and thanks for the advice. We'll look more closely at it. A: You don't need to call retain in this case. [[NSMutableArray alloc] init] creates the object with a retain count of 1, so it won't get released until you specifically release it. It would be good to see more of the code. I don't think the error is in the very small amount you've posted so far.. A: I agree with Mike - based on the code you've posted, it looks correct. In addition to posting the code used to call the function and add items to the array, you could try checking the memory addresses of the pointer at the end of the function (when it has all of the objects), and also once it has returned (when it has no objects). I'm not sure why it would be different, but then again the items should stick in the array as well. A: You need to show us a bit more of how you're adding objects to the array for us to really help. I've seen a lot of people write code like this: NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:0]; array = [foo bar]; People doing this think it "creates and then sets" a mutable array, but that's not at all what it does. Instead, it creates a mutable array, assigns it to the variable named array, and then assigns a different mutable array to that variable. So be sure you're not confusing the variable for the object to which it is a reference. The object isn't the variable, it's interacted with through the variable. A: NSMutableArray retains objects added to it, but have you retained the array itself? A: The code you posted should work. You must be doing something funny in the decoder function. You should not retain that array. It's automatically retained with init. If you retain it, you'll leak memory. If you are just starting with objective c, take time and read "Introduction to Memory Management Programming Guide for Cocoa". It will spare you lots of headache. Why are you writing so much code to read an array from a file? 
It's already supported by the framework: + arrayWithContentsOfFile: Returns an array initialized from the contents of a specified file. The specified file can be a full or relative pathname; the file that it names must contain a string representation of an array, such as that produced by the writeToFile:atomically: method. So you can do this: NSMutableArray *myArray = [NSMutableArray arrayWithContentsOfFile:@"path/to/my/file"]; This is a convenience method, so the object will autorelease. Make sure to retain this one if you want to keep it around.
{ "language": "en", "url": "https://stackoverflow.com/questions/129890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Javascript and session variables I have a database that stores events in it and a page with a calendar object on it. When rendering the days it looks through the months events and if any match the current day being rendered it creates a linkbutton to represent the event in the day on the calendar and adds it to that cell. I add some javascript to the linkbutton to change the window.location to a page to view event details passing EventID in the querystring ( I tried setting the postbackurl of the newly created linkbutton but it wasnt causing a postback... no luck). I need to set a Session variable ie. Session("EditMode") = "Edit" So the new page will know it is to get an existing event info rather than prepare to create a new event? Any SUGGESTIONS? A: Your session vars are controlled by the server, JS runs client side, and as such cannot modify the vars directly. You need to make server requests using POST or GET and hidden iframes, or XMLHTTPRequest() calls to send data from the JS to the server, and then have your server side code handle the vars. Add another query string variable that the page can use to trigger existing vs new. A: Add another query string variable that the page can use to trigger existing vs new. A: If you are using something like Struts2, you can have a hidden variable in your jsp <s:hidden id="EditModeId" value="%{#session.EditMode}"/> And within javascript simply access this variable alert(document.getElementById('EditModeId').value); A: You definitely need to add a variable to the target page. But I take it that you are doing a popup scenario, so you should be able to create a javascript function OpenWindow() and fire it off when the user clicks the link. <script> function OpenWindow(eventId, editMode) { var window = window.open("popup.aspx?eventId=" + eventId + "&editMode=" + editMode); } </script> On the server side you need to build the call to the OpenWindow function. For example: onclick="OpenWindow(eventId=" + row["eventId"].ToString() + "&editMode=" + editMode.ToString() + ");" So in other words, prep everything on the serverside to set your javascript to post all variables to the new page. Hope this helps. A: var page1 = document.getElementById("textbox").value; sessionStorage.setItem("page1content", page1); in other page use this value as like session variable document.getElementById("textbox2").value=sessionStorage.getItem("page1content");
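On the server side, a small sketch of the query-string approach suggested above could look like this in the target page's code-behind (shown in C# here; the page, key, and parameter names are only examples):
protected void Page_Load(object sender, EventArgs e)
{
    // the calling page navigated to e.g. ViewEvent.aspx?EventID=42&mode=Edit
    string eventId = Request.QueryString["EventID"];
    string mode = Request.QueryString["mode"];

    if (!String.IsNullOrEmpty(eventId) && mode == "Edit")
    {
        Session["EditMode"] = "Edit";   // only needed if other pages still rely on the session flag
        // load the existing event using eventId
    }
    else
    {
        Session["EditMode"] = "New";
        // prepare a blank event for creation
    }
}
This keeps the client-side JavaScript limited to building the URL, while the session variable (if you still want one) is set by server code where it belongs.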
{ "language": "en", "url": "https://stackoverflow.com/questions/129898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can adding data to a segment in flash memory screw up a program's timing? I have a real-time embedded app with the major cycle running at 10KHz. It runs on a TI TMS320C configured to boot from flash. I recently added an initialized array to a source file, and all of a sudden the timing is screwed up (in a way too complex to explain well - essentially a serial port write is no longer completing on time.) The things about this that baffle me: * *I'm not even accessing the new data, just declaring an initialized array. *It is size dependent - the problem only appears if the array is >40 words. *I know I'm not overflowing any data segments in the link map. *There is no data caching, so it's not due to disrupting cache consistency. Any ideas on how simply increasing the size of the .cinit segment in flash can affect the timing of your code? Additional Info: I considered that maybe the code had moved, but it is well-separated from the data. I verified through the memory map that all the code segments have the same addresses before and after the bug. I also verified that none of the segments are full - the only addresses that change in the map are a handful in the .cinit section. That section contains data values used to initialize variables in ram (like my array). It shouldn't ever be accessed after main() is called. A: My suspicions would point to a change in alignment between your data/code and the underlying media/memory. Adding to your data would change the locations of memory in your heap (depending on the memory model) and might put your code across a 'page' boundary on the flash device, causing latency which was not there before. A: Perhaps the new statically allocated array pushes existing data into slower memory regions, causing accesses to that data to be slower? A: Does the problem recur if the array is the last thing in its chunk of address space? If not, looking in your map, try moving the array declaration so that, one by one, things placed after it are shuffled to be before it instead. This way you can pinpoint the relevant object and begin to work out why moving it causes the delay. A: I would take a risk and claim that you don't have a performance problem here, but rather some kind of memory corruption whose symptoms look like a performance problem. Adding an array to your executable changes the memory picture. So my guess would be that you have a memory corruption that is mostly harmless (i.e. overwriting an unused part of memory), and shifting your memory by more than 40 bytes causes the corruption to become a bigger problem. Which one it is, is the real question. A: After more than a day staring at traces and generated assembly, I think I figured it out. The root-cause problem turned out to be a design issue that caused glitches only if the ISR that kicked off the serial port write collided with a higher-priority one. The timing just happened to work out that it only took adding a few extra instructions to one loop to cause the two interrupts to collide. So the question becomes: How does storing, but not accessing, additional data in flash memory cause additional instructions to be executed? It appears that the answer is related to, but not quite the same as, the suggestions by Frosty and Frederico. The new array does move some existing variables, but not across page boundaries or to slower regions (on this board, access times should be the same for all regions).
But it does change the offsets of some frequently accessed structures, which causes the optimizer to issue slightly different instruction sequences for accessing them. One data alignment may cause a one-cycle pipeline stall where the other does not. And those few instructions shifted the timing enough to expose the underlying problem. A: Could the initialization be overwriting another adjacent piece of code? Are there any structs or variables that use the array that are now bigger and could cause a stack overflow? A: Could be a bank or page conflict as well. Maybe you have two routines that are called quite often (interrupt handlers or so) that were on the same page and are now split across two pages.
{ "language": "en", "url": "https://stackoverflow.com/questions/129911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Getting Python to use the ActiveTcl libraries Is there any way to get Python to use my ActiveTcl installation instead of having to copy the ActiveTcl libraries into the Python/tcl directory? A: Not familiar with ActiveTcl, but in general here is how to get a package/module to be loaded when that name already exists in the standard library: import sys dir_name="/usr/lib/mydir" sys.path.insert(0,dir_name) Substitute the value for dir_name with the path to the directory containing your package/module, and run the above code before anything is imported. This is often done through a 'sitecustomize.py' file so that it will take effect as soon as the interpreter starts up so you won't need to worry about import ordering.
{ "language": "en", "url": "https://stackoverflow.com/questions/129912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What property in Netbeans to I need to change to set the name of my java swing app in the OS X menubar and dock? I found info.plist, but changing @PROJECTNAMEASIDENTIFIEER@ in multiple keys here had no effect. Thanks, hating netbeans. A: Check: nbproject/project.properties nbproject/project.xml in project.xml look for the name element... But... Why not just select the main project and right click and do rename? A: The answer depends on how you run your application. If you run it from the command line, use '-Xdock:name=appname' in the JVM arguments. See the section "More tinkering with the menu bar" in the article linked to by Dan Dyer. If you are making a bundled, double-clickable application, however, you just need to set the standard CFBundle-related keys in your application's Info.plist (see the documentation on Info.plist keys for more details). A: This is not NetBeans-specific, but this article has some useful tips about tweaking your Swing apps so that they fit in on OS X.
{ "language": "en", "url": "https://stackoverflow.com/questions/129915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to launch a web browser with a custom url from a C# application? It's common knowledge that using System.Diagnostics.Process.Start is the way to launch a url from a C# application: System.Diagnostics.Process.Start("http://www.mywebsite.com"); However, if this url is invalid the application seems to have no way of knowing that the call failed or why. Is there a better way to launch a web browser? If not, what is my best option for url validation? A: Try an approach as below. try { var url = new Uri("http://www.example.com/"); Process.Start(url.AbsoluteUri); } catch (UriFormatException) { // URL is not parsable } This does not ensure that the resource exists, but it does ensure the URL is well-formed. You might also want to check if the scheme matches http or https. A: If you need to verify that the URL exists, the only thing you can do is create a custom request in advance and verify that it works. I'd still use Process.Start to shell out to the actual page, though. A: Check the Uri.IsWellFormedUriString static method. It's cheaper than catching an exception.
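Putting the suggestions above together, a hedged C# sketch that checks the URL's format (and optionally that it responds) before shelling out might look like this - the URL strings and the HEAD probe are just examples, and note that some servers refuse HEAD requests:
using System;
using System.Net;

public static class BrowserLauncher
{
    public static bool TryOpen(string url)
    {
        // cheap syntactic check first - no exception handling needed
        if (!Uri.IsWellFormedUriString(url, UriKind.Absolute))
            return false;

        var uri = new Uri(url);
        if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
            return false;

        // optional: verify the resource actually answers before launching the browser
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "HEAD";
            using (request.GetResponse()) { }
        }
        catch (WebException)
        {
            return false;
        }

        System.Diagnostics.Process.Start(url);
        return true;
    }
}
Usage would be something like if (!BrowserLauncher.TryOpen("http://www.mywebsite.com")) { /* tell the user the URL is bad */ }.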
{ "language": "en", "url": "https://stackoverflow.com/questions/129917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Populating a database with file names from directories I have an application which behaves as a slideshow for all pictures in a folder. It is written in Borland's C++ Builder (9). It currently uses some borrowed code to throw the filenames into a listbox and save the listbox items as a text file. I want to update this so that the filenames are stored in a proper database so that I can include extra fields and do proper SQL things with it. So basically I would be able to work it out if I saw some 'sample' code doing the same thing. So if anyone knows of any code that does this I would be grateful. It needs to be able to do it on certain file types... not just all the files. A: You basically need to write a recursive function with a TDataSet parameter. (I could not compile my code, so you get it "as is") void AddFiles(AnsiString path, TDataSet *DataSet) { TSearchRec sr; int f; f = FindFirst(path+"\\*.*", faAnyFile, sr); while( !f ) { if(sr.Attr & faDirectory) { if(sr.Name != "." && sr.Name != "..") { AnsiString subDir = path + "\\" + sr.Name; // build the subdirectory path in a local so the loop's path is not clobbered AddFiles(subDir, DataSet); } } else { DataSet->Append(); DataSet->FieldByName("Name")->Value = sr.Name; /* other fields ... */ DataSet->Post(); } f = FindNext(sr); } FindClose(sr); }
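The question also asks for limiting the scan to certain file types. One way to add that to the else branch above, assuming the VCL's ExtractFileExt function is available (the extension list is just an example):
AnsiString ext = ExtractFileExt(sr.Name).LowerCase();
if (ext == ".jpg" || ext == ".gif" || ext == ".bmp")
{
    DataSet->Append();
    DataSet->FieldByName("Name")->Value = sr.Name;
    /* other fields ... */
    DataSet->Post();
}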
{ "language": "en", "url": "https://stackoverflow.com/questions/129919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you sign your Firefox extensions? I have developed a couple of extensions for Firefox, and am annoyed that it is so hard to get the extension signed. When an extension isn't signed, it says "Author not verified" when it is installed, and to me that just looks wrong. I have a simple build script that builds my .xpi file from sources, and I have a licenced copy of PKZip (which according to a number of tutorials is required to build a signed xpi file that Firefox requires), but I haven't found a way to get a free/cheap certificate that actually works or a set of instructions that do the trick. Since my extensions are free, I don't want to spend $400 on a commercial certificate, but I don't mind spending $50 or so to get it done. I have both Linux and Windows machines, although my build script currently uses Windows and that would be most convenient to use. How have you solved this? What do I need to do to automatically and securely sign my extensions when they are built? Edit: I appreciate the Google hits, but the steps they provide aren't complete enough on how to actually get a certificate that works. The feeling I get reminds me of this classic: A: Avoid the GoDaddy codesigning certs as the necessary intermediate CA certificate isn't in Firefox by default. C=US,ST=Arizona,L=Scottsdale,O=GoDaddy.com\,Inc.,OU=http://certificates.godaddy.com/repository,CN=Go Daddy Secure Certification Authority,SERIALNUMBER=07969287' If you sign with it your users will get signing errors with it. e.g. SIgning could not be verified. -260 A: What I found with Google was this: http://www.mercille.org/snippets/xpiSigning.php which states: If you don't want a commercial certificate or can't afford one, Ascertia can provide you with a free certificate, but turning it into a code signing certificate requires some extra work, which I have detailed on another page. I can't say that I've tried it. And on http://developer.mozilla.org/en/Signing_a_XPI it says: The cheapest universally supported (Mozilla, Java, Microsoft) certificate seems to be the Comodo Instant-SSL offering. You can get a free certificate for open-source developers from Unizeto Certum, but their root certificate is only present in Mozilla Firefox and Opera (not Java or Microsoft). A: I've used the comodo certificate to sign XPIs. It was the cheapest option at the time. I've written a few posts on the XPI Format and a howto for signing using a java commandline tool. My tool XPISigner simplifies the process considerably and is integratable into build systems. I've removed the tool as it no longer works with FF4 or higher. Source is available on http://code.google.com/p/xpisigner/ if anyone feels like fixing. A: Yes, XPI signing is unfortunately quite untrivial. I would advise searching/posting to the mozilla newsgroups (dev-extensions, project owners @ mozdev, irc.mozilla.org) and also trying to get in touch with the people who got it to work. A: Tucows sells Comodo code signing certificates for $75 per year, that's as cheap as it goes from what I can tell (https://author.tucows.com/, "Code Signing Certificates" section). That's still too much money for me to spend so I didn't try how it works. Not that I can try, from what I can tell you need to be a registered organization to buy a Comodo certificate. 
As to Ascertia, getting a certificate is easy enough (http://www.ascertia.com/onlineCA/Issuer/CerIssue.aspx) - but such a certificate is as worthless as a self-issued certificate because you would need to import their root certificate before seeing an effect. A: If you have an Open Source project, you can get a free code signing certificate from Unizeto. The steps to get the certificate itself are described in detail here. Once you have the certificate, do the following: * *get the private key from your browser (e.g. download it as .p12 from your keychain - do not set a password) and convert it into PEM format via openssl pkcs12 -in key.p12 -nodes -out private.key -nocerts *Open your .pem file that you downloaded from Unicert, add your private key beneath it, and the Public Key of Certum Level III CA from here beneath the private key, so it looks like this: -----BEGIN CERTIFICATE----- [your certificate from Certum] -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- [the private key you just converted from the .p12 file from your keychain] -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- [the Certum Level III CA public key you just downloaded] -----END CERTIFICATE----- *Save this file as cert_with_key_and_ca.pem *Install xpisign.py with pip install https://github.com/nmaier/xpisign.py/zipball/master *Run xpisign -k cert_with_key_and_ca.pem unsigned.xpi signed.xpi *Drag & Drop the signed.xpi into Firefox and you should see the author name where before there was a (Author not verified) message next to the extension name.
{ "language": "en", "url": "https://stackoverflow.com/questions/129920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What is MVC (Model View Controller)? I've heard the term MVC (Model View Controller) tossed about with a ton of Buzz lately, but what really is it? A: MVC is a design pattern originally pioneered in the olden days of smalltalk. The concept was that a model would represent your application state and logic, and controllers would handle IO between "Views". A View was a representation of the state in the model. For example, your model may be a spreadsheet document, and you may have a view that represents it as a spreadsheet and a view that represents it as a pivot table. Modern MVC has been polluted with fake MVC web junk, so I'll let others answer that. A: Here is a naive description of MVC : http://www.devcodenote.com/2015/04/mvc-model-view-controller.html A snippet: Definition : It is a design pattern which separates an application into multiple layers of functionality. The layers: Model Represents data. It acts as an interface between the database and the application (as a data object). It will handle validations, associations, transactions etc. Controller It gathers and processes data. Handles code which does data selection and data messaging. View Displays output to the users. A: MVC Design Pattern: 4 parts = User, View, Controller, Model. User: - sees the View and uses the Controller. Model: - holds the data and updates the Model that there is new data/state. View: - displays the data that the Model has. Controller: - takes the request from the user to get or set information, then communicates with either the View or Model, resp. - it "gets" via the View. - it "sets" via the Model. A: You might want to take a look at what Martin Fowler has to say about MVC, MVP and UI architectures in general at Martin Fowlers site. A: As the tag on your question states its a design pattern. But that probably doesn't help you. Basically what it is, is a way to organize your code into logical groupings that keep the various pieces separate and easily modifiable. Simplification: Model = Data structure / Business Logic View = Output layer (i.e HTML code) Controller = Message transfer layer So when people talk about MVC what they are talking about is dividing up there code into these logical groups to keep it clean and structured, and hopefully loosely coupled. By following this design pattern you should be able to build applications that could have there View completely changed into something else without ever having to touch your controller or model (i.e. switching from HTML to RSS). There are tons and tons of tutorials out there just google for it and I'm sure you'll turn up at least one that will explain it in terms that click with you. A: I like this article by Martin Fowler. You'll see that MVC is actually more or less dead, strictly speaking, in its original domain of rich UI programming. The distinction between View and Controller doesn't apply to most modern UI toolkits. The term seems to have found new life in web programming circles recently. I'm not sure whether that's truly MVC though, or just re-using the name for some closely related but subtly different ideas. A: Wikipedia seems to describe it best so far: http://en.wikipedia.org/wiki/Model-view-controller Model-view-controller (MVC) is an architectural pattern used in software engineering. 
Successful use of the pattern isolates business logic from user interface considerations, resulting in an application where it is easier to modify either the visual appearance of the application or the underlying business rules without affecting the other. In MVC, the model represents the information (the data) of the application and the business rules used to manipulate the data; the view corresponds to elements of the user interface such as text, checkbox items, and so forth; and the controller manages details involving the communication to the model of user actions such as keystrokes and mouse movements A: The MVC or Model-View-Controller User Interface Paradigm was first described by Trygve Reenskaug of the Xerox PARC. In first appeared in print in Byte magazine volume 6, number 8, in August of 1981. A: This What is MVC blog article on Oreilly has you covered. A: MVC is a software architecture pattern that separates representation from user interaction. Generally, the model consists of application data and functions that interact with it, while the view presents this data to the user; the controller mediates between the two. A: MVC is a way to partition a user interface element into 3 distinct concepts. The model is the data on which the interface operates. The view is how the element is represented visually (or maybe audibly?). The controller is the logic that operates on the data. For example, if you have some text you want to manipulate in a UI. A simple string could represent the data. The view could be a text field. The controller is the logic that translates input from the user - say character or mouse input - and makes changes to the underlying data model. A: Like many have said already, MVC is a design pattern. I'm teaching one of my coworkers now and have explained it this way: Models - The data access layer. This can be direct data access, web services, etc Views - The presentation layer of your application. Controllers - This is the business logic for your application. This pattern enhances test-driven development. A: It is a way of separating the underlying functionality of your application (model) from the way it interacts with the user (view). The controller coordinates how the model and view talk to each other. Whilst it is all the rage at the moment, it is important to remember that preventing the model itself being able to determine exactly how its data is presented to the user can seen as a negative thing. The most obvious example is with HTML. The original intention of HTML was that there should be a clear separation of the model (HTML) from the view (rendered webpage) via a controller (the browser). There has been such a backlash against this original intention that browsers are criticised if they do not render a page pixel perfect to the designer's desired view.
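Since most of the answers describe the three roles only in prose, here is a deliberately tiny, framework-free sketch of the separation (illustrative only - real MVC frameworks add observers, routing, templating and so on):
class CounterModel:                    # Model: holds state and the rules for changing it
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1

class CounterView:                     # View: renders whatever state it is given
    def render(self, count):
        print("Current count: %d" % count)

class CounterController:               # Controller: turns user actions into model updates
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def handle_click(self):
        self.model.increment()
        self.view.render(self.model.count)

controller = CounterController(CounterModel(), CounterView())
controller.handle_click()              # one user action flowing controller -> model -> view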
{ "language": "en", "url": "https://stackoverflow.com/questions/129921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What's the best way to generate a Text file in a .net website? I have a page in my vb.net web application that needs to toss a bunch of data into a text file and then present it to the user for download. What's the best / most efficient way to build such a text file on a .net web server? Edit: to answer a question down below, this is going to be a download once and then throw-away kind of file. Update: I glued together the suggestions by John Rudy and DavidK, and it worked perfectly. Thanks, all! A: Use a StringBuilder to create the text of the file, and then send it to the user using Content-Disposition. Example found here: http://www.eggheadcafe.com/community/aspnet/17/76432/use-the-contentdispositi.aspx private void Button1_Click(object sender, System.EventArgs e) { StringBuilder output = new StringBuilder; //populate output with the string content String fileName = "textfile.txt"; Response.ContentType = "application/octet-stream"; Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName); Response.WriteFile(output.ToString()); } A: Don't build it at all, use an HttpHandler and serve the text file direct into the output stream: http://digitalcolony.com/labels/HttpHandler.aspx The code block halfway down is a good example, you can adjust to your own: public void ProcessRequest(HttpContext context) { response = context.Response; response.ContentType = "text/xml"; using (TextWriter textWriter = new StreamWriter(response.OutputStream, System.Text.Encoding.UTF8)) { XmlTextWriter writer = new XmlTextWriter(textWriter); writer.Formatting = Formatting.Indented; writer.WriteStartDocument(); writer.WriteStartElement("urlset"); writer.WriteAttributeString("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance"); writer.WriteAttributeString("xsi:schemaLocation", "http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"); writer.WriteAttributeString("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9"); // Add Home Page writer.WriteStartElement("url"); writer.WriteElementString("loc", "http://example.com"); writer.WriteElementString("changefreq", "daily"); writer.WriteEndElement(); // url // Add code Loop here for page nodes /* { writer.WriteStartElement("url"); writer.WriteElementString("loc", url); writer.WriteElementString("changefreq", "monthly"); writer.WriteEndElement(); // url } */ writer.WriteEndElement(); // urlset } } A: The answer will depend on whether, as Forgotten Semicolon mentions, you need repeated downloads or once-and-done throwaways. Either way, the key will be to set the content-type of the output to ensure that a download window is displayed. The problem with straight text output is that the browser will attempt to display the data in its own window. The core way to set the content type would be something similar to the following, assuming that text is the output string and filename is the default name you want the file to be saved (locally) as. HttpResponse response = HttpContext.Current.Response; response.Clear(); response.ContentType = "application/octet-stream"; response.Charset = ""; response.AddHeader("Content-Disposition", String.Format("attachment; filename=\"{0}\"", filename)); response.Flush(); response.Write(text); response.End(); This will prompt a download for the user. Now it gets trickier if you need to literally save the file on your web server -- but not terribly so. There you'd want to write out the text to your text file using the classes in System.IO. 
Ensure that the path you write to is writable by the Network Service, IUSR_MachineName and ASPNET Windows users. Otherwise, same deal -- use content type and headers to ensure download. I'd recommend not literally saving the file unless you need to -- and even then, the technique of doing so directly on the server may not be the right idea. (EG, what if you need access control for downloading said file? Now you'd have to do that outside your app root, which may or may not even be possible depending on your hosting environment.) So without knowing whether you're in a one-off or file-must-really-save mode, and without knowing security implications (which you'll probably need to work out yourself if you really need server-side saves), that's about the best I can give you. A: Bear in mind it doesn't ever need to be a 'file' at the server end. It's the client which turns it into a file.
{ "language": "en", "url": "https://stackoverflow.com/questions/129927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can a standalone .EXE be created from a coded WebTest in Visual Studio Team System I am running VS Team System 2008. I have created a web test that I want to use for monitoring a company web site. It interacts with the site and does some round trip processing. I want to create a standalone EXE file that can be run remotely. I have tried converting it to VB code and C# code and then compiling it into an EXE. But, when running it, no traffic is generated from the host to the webserver. Has anyone tried to do this before successfully? I tried this in VB. Option Strict Off Option Explicit On Imports Microsoft.VisualStudio.TestTools.WebTesting Imports Microsoft.VisualStudio.TestTools.WebTesting.Rules Imports System Imports System.Collections.Generic Imports System.Text Public Module RunMonitor Sub Main() Dim S As TheMonitor.MonitorCoded = New TheMonitor.MonitorCoded() S.Run() End Sub End Module Namespace TheMonitor Public Class MonitorCoded Inherits ThreadedWebTest Public Sub New() MyBase.New() Me.PreAuthenticate = True End Sub Public Overrides Sub Run() 'WebRequest code is here' End Sub End Class End Namespace Any suggestions appreciated. A: Daniel, I created most of the classes in the Microsoft.VisualStudio.TestTools.WebTesting namespace and I can assure you it's NOT possible to run a coded web test without Visual Studio or MSTest.exe. Coded web tests basically hand WebTestRequests back to the web test engine, they don't start the web test engine themselves. We weren't trying to prevent the use case you described, but it just wasn't a design goal. Josh A: Can you call MSTest.exe? If your test was created using Visual Studio, it uses MSTest to execute it. If you didn't use Visual Studio to create the webTest, can you provide a little more detail?
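To make the MSTest suggestion concrete: compile the coded web test into a test assembly and invoke MSTest.exe from a script or scheduled task, roughly like this (the assembly and test names are placeholders, and the exact path and switches should be checked against your Visual Studio version):
"%VS90COMNTOOLS%..\IDE\MSTest.exe" /testcontainer:SiteMonitor.Tests.dll /test:MonitorCoded /resultsfile:monitor.trx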
{ "language": "en", "url": "https://stackoverflow.com/questions/129932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What does "0 but true" mean in Perl? Can someone explain what exactly the string "0 but true" means in Perl? As far as I understand, it equals zero in an integer comparison, but evaluates to true when used as a boolean. Is this correct? Is this a normal behavior of the language or is this a special string treated as a special case in the interpreter? A: 0 means false in Perl (and other languages related to C). For the most part, that's a reasonable behavior. Other languages (Lua for instance) treat 0 as true and provide another token (often nil or false) to represent a non-true value. One case where the Perl way doesn't work so well is when you want to return either a number or, if the function fails for some reason, a false value. For instance, if you write a function that reads a line from a file and returns the number of characters on the line. A common usage of the function might be something like: while($c = characters_in_line($file)){ ... }; Notice that if the number of characters on a particular line is 0, the while loop will end before the end of the file. So the characters_in_line function should special case 0 characters and return '0 but true' instead. That way the function will work as intended in the while loop, but also return the correct answer should it be used as a number. Note that this isn't a built in part of the language. Rather it takes advantage of Perl's ability to interpret a string as a number. So other stings are sometimes used instead. DBI uses "0E0", for instance. When evaluated in numeric context, they return 0, but in boolean context, false. A: Things that are false: * *"". *"0". *Things that stringify to those. "0 but true" is not one of those, so it's not false. Furthermore, Perl returns "0 but true" where a number is expected in order to signal that a function succeeded even though it returned zero. sysseek is an example of such a function. Since the value is expected to be used as a number, Perl is coded to consider it to be a number. As a result, no warnings are issued when it's used as a number, and looks_like_number("0 but true") returns true. Other "true zeroes" can be found at http://www.perlmonks.org/?node_id=464548. A: It's normal behaviour of the language. Quoting the perlsyn manpage: The number 0, the strings '0' and "", the empty list (), and undef are all false in a boolean context. All other values are true. Negation of a true value by ! or not returns a special false value. When evaluated as a string it is treated as "", but as a number, it is treated as 0. Because of this, there needs to be a way to return 0 from a system call that expects to return 0 as a (successful) return value, and leave a way to signal a failure case by actually returning a false value. "0 but true" serves that purpose. A: Because it's hardcoded in the Perl core to treat it as a number. This is a hack to make Perl's conventions and ioctl's conventions play together; from perldoc -f ioctl: The return value of ioctl (and fcntl) is as follows: if OS returns: then Perl returns: -1 undefined value 0 string "0 but true" anything else that number Thus Perl returns true on success and false on failure, yet you can still easily determine the actual value returned by the operating system: $retval = ioctl(...) || -1; printf "System returned %d\n", $retval; The special string "0 but true" is exempt from -w complaints about improper numeric conversions. 
A: Additionally to what others said, "0 but true" is special-cased in that it doesn't warn in numeric context: $ perl -wle 'print "0 but true" + 3' 3 $ perl -wle 'print "0 but crazy" + 3' Argument "0 but crazy" isn't numeric in addition (+) at -e line 1. 3 A: Another example of "0 but true": The DBI module uses "0E0" as a return value for UPDATE or DELETE queries that didn't affect any records. It evaluates to true in a boolean context (indicating that the query was executed properly) and to 0 in a numeric context indicating that no records were changed by the query. A: I just found proof that the string "0 but true" is actially built into the interpreter, like some people here already answered: $ strings /usr/lib/perl5/5.10.0/linux/CORE/libperl.so | grep -i true Perl_sv_true %-p did not return a true value 0 but true 0 but true A: The value 0 but true is a special case in Perl. Although to your mere mortal eyes, it doesn't look like a number, wise and all knowing Perl understands it really is a number. It has to do with the fact that when a Perl subroutine returns a 0 value, it is assumed that the routine failed or returned a false value. Imagine I have a subroutine that returns the sum of two numbers: die "You can only add two numbers\n" if (not add(3, -2)); die "You can only add two numbers\n" if (not add("cow", "dog")); die "You can only add two numbers\n" if (not add(3, -3)); The first statement won't die because the subroutine will return a 1. That's good. The second statement will die because the subroutine won't be able to add cow to dog. And, the third statement? Hmmm, I can add 3 to -3. I just get 0, but then my program will die even though the add subroutine worked! To get around this, Perl considers 0 but true to be a number. If my add subroutine returns not merely 0, but 0 but true, my third statement will work. But is 0 but true a numeric zero? Try these: my $value = "0 but true"; print qq(Add 1,000,000 to it: ) . (1_000_000 + $value) . "\n"; print "Multiply it by 1,000,000: " . 1_000_000 * $value . "\n"; Yup, it's zero! The index subroutine is a very old piece of Perl and existed before the concept of 0 but true was around. It is suppose to return the position of the substring located in the string: index("barfoo", "foo"); #This returns 3 index("barfoo", "bar"); #This returns 0 index("barfoo", "fu"); #This returns ...uh... The last statment returns a -1. Which means if I did this: if ($position = index($string, $substring)) { print "It worked!\n"; } else { print "If failed!\n"; } As I normally do with standard functions, it wouldn't work. If I used "barfoo" and "bar" like I did in the second statement, The else clause would execute, but if I used "barfoo" and "fu" as in the third, the if clause would execute. Not what I want. However, if the index subroutine returned 0 but true for the second statement and undef for the third statement, my if/else clause would have worked. A: When you want to write a function that returns either an integer value, or false or undef (i.e. for the error case) then you have to watch out for the value zero. Returning it is false and shouldn't indicate the error condition, so returning "0 but true" makes the function return value true while still passing back the value zero when math is done on it. A: "0 but true" is a string just like any other but because of perl's syntax it can serve a useful purpose, namely returning integer zero from a function without the result being "false"(in perl's eyes). And the string need not be "0 but true". 
"0 but false" is still "true"in the boolean sense. consider: if(x) for x: yields: 1 -> true 0 -> false -1 -> true "true" -> true "false" -> true "0 but true" -> true int("0 but true") ->false The upshot of all of this is you can have: sub find_x() and have this code be able to print "0" as its output: if($x = find_x) { print int($x) . "\n"; } A: You may also see the string "0E0" used in Perl code, and it means the same thing, where 0E0 just means 0 written in exponential notation. However, since Perl only considers "0", '' or undef as false, it evaluates to true in a boolean context. A: The string ``0 but true'' is still a special case: for arg in "'0 but true'" "1.0*('0 but true')" \ "1.0*('0 but false')" 0 1 "''" "0.0" \ "'false'" "'Ja'" "'Nein'" "'Oui'" \ "'Non'" "'Yes'" "'No'" ;do printf "%-32s: %s\n" "$arg" "$( perl -we ' my $ans=eval $ARGV[0]; $ans=~s/^(Non?|Nein)$//; if ($ans) { printf "true: |%s|\n",$ans } else { printf "false: |%s|", $ans };' "$arg" )" done give the following: (note the ``warning''!) '0 but true' : true: |0 but true| 1.0*('0 but true') : false: |0| Argument "0 but false" isn't numeric in multiplication (*) at (eval 1) line 1. 1.0*('0 but false') : false: |0| 0 : false: |0| 1 : true: |1| '' : false: || 0.0 : false: |0| 'false' : true: |false| 'Ja' : true: |Ja| 'Nein' : false: || 'Oui' : true: |Oui| 'Non' : false: || 'Yes' : true: |Yes| 'No' : false: || ... and don't forget to RTFM! man -P'less +"/0 but [a-z]*"' perlfunc ... "fcntl". Like "ioctl", it maps a 0 return from the system call into "0 but true" in Perl. This string is true in boolean context and 0 in numeric context. It is also exempt from the normal -w warnings on improper numeric conversions. ... A: It's hard-coded in Perl's source code, specifically in Perl_grok_number_flags in numeric.c. Reading that code I discovered that the string "infinity" (case insensitive) passes the looks_like_number test too. I hadn't known that. A: In an integer context, it evaluates to 0 (the numeric part at the beginning of the string) and is zero. In a scalar context, it's a non-empty value, so it is true. * *if (int("0 but true")) { print "zero"; } (no output) *if ("0 but true") { print "true"; } (prints true)
{ "language": "en", "url": "https://stackoverflow.com/questions/129945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: What's the best way to keep multiple Linux servers synced? I have several different locations in a fairly wide area, each with a Linux server storing company data. This data changes every day in different ways at each different location. I need a way to keep this data up-to-date and synced between all these locations. For example: In one location someone places a set of images on their local server. In another location, someone else places a group of documents on their local server. A third location adds a handful of both images and documents to their server. In two other locations, no changes are made to their local servers at all. By the next morning, I need the servers at all five locations to have all those images and documents. My first instinct is to use rsync and a cron job to do the syncing over night (1 a.m. to 6 a.m. or so), when none of the bandwidth at our locations is being used. It seems to me that it would work best to have one server be the "central" server, pulling in all the files from the other servers first. Then it would push those changes back out to each remote server? Or is there another, better way to perform this function? A: The way I do it (on Debian/Ubuntu boxes): * *Use dpkg --get-selections to get your installed packages *Use dpkg --set-selections to install those packages from the list created *Use a source control solution to manage the configuration files. I use git in a centralized fashion, but subversion could be used just as easily. A: AFAIK, rsync is your best choice, it supports partial file updates among a variety of other features. Once setup it is very reliable. You can even setup the cron with timestamped log files to track what is updated in each run. A: An alternative if rsync isn't the best solution for you is Unison. Unison works under Windows and it has some features for handling when there are changes on both sides (not necessarily needing to pick one server as the primary, as you've suggested). Depending on how complex the task is, either may work. A: One thing you could (theoretically) do is create a script using Python or something and the inotify kernel feature (through the pyinotify package, for example). You can run the script, which registers to receive events on certain trees. Your script could then watch directories, and then update all the other servers as things change on each one. For example, if someone uploads spreadsheet.doc to the server, the script sees it instantly; if the document doesn't get modified or deleted within, say, 5 minutes, the script could copy it to the other servers (e.g. through rsync) A system like this could theoretically implement a sort of limited 'filesystem replication' from one machine to another. Kind of a neat idea, but you'd probably have to code it yourself. A: I don't know how practical this is, but a source control system might work here. At some point (perhaps each hour?) during the day, a cron job runs a commit, and overnight, each machine runs a checkout. You could run into issues with a long commit not being done when a checkout needs to run, and essentially the same thing could be done rsync. I guess what I'm thinking is that a central server would make your sync operation easier - conflicts can be handled once on central, then pushed out to the other machines. A: rsync would be your best choice. But you need to carefully consider how you are going to resolve conflicts between updates to the same data on different sites. 
If site-1 has updated 'customers.doc' and site-2 has a different update to the same file, how are you going to resolve it? A: I have to agree with Matt McMinn; especially since it's company data, I'd use source control and, depending on the rate of change, run it more often. I think the central clearinghouse is a good idea. A: It depends on the following: * How many servers/computers need to be synced? ** If there are too many servers, using rsync becomes a problem. ** Either you use threads and sync to multiple servers at the same time, or you sync them one after the other. So you are looking at high load on the source machine, or, in the latter case, inconsistent data across the servers (in a cluster) at any given point in time. * Size of the folders that need to be synced and how often they change. ** If the data is huge then rsync will take time. * Number of files. ** If the number of files is large, and especially if they are small files, rsync will again take a lot of time. So it all depends on the scenario whether to use rsync, NFS, or version control. * If there are only a few servers and just a small amount of data, then it makes sense to run rsync every hour. You can also package content into an RPM if the data changes only occasionally. With the information provided, IMO version control will suit you best. Rsync/scp might give problems if two people upload different files with the same name. NFS over multiple locations needs to be architected with perfection. Why not have a single repository (or multiple repositories) and have everyone just commit to it? All you need to do is keep the repository in sync. If the data is huge and updates are frequent then your repository server will need a good amount of RAM and a good I/O subsystem.
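For the rsync-plus-cron approach the question proposes, a minimal hub-and-spoke sketch on the central server might look like the crontab below (hostnames, paths and times are placeholders; --delete is deliberately omitted because it is risky when changes can originate at every site, and same-name conflicts still need a policy of their own):
# pull the overnight changes in from each site, then push the merged tree back out
0 1 * * *  rsync -az site1.example.com:/srv/companydata/ /srv/companydata/
30 1 * * * rsync -az site2.example.com:/srv/companydata/ /srv/companydata/
0 4 * * *  rsync -az /srv/companydata/ site1.example.com:/srv/companydata/
30 4 * * * rsync -az /srv/companydata/ site2.example.com:/srv/companydata/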
{ "language": "en", "url": "https://stackoverflow.com/questions/129958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I convert a JTS-Geometry into an AWT-Shape? Is it possible to convert a com.vividsolutions.jts.geom.Geometry (or a subclass of it) into a class that implements java.awt.Shape? Which library or method can I use to achieve that goal? A: Also have a look at ShapeWriter provided by the JTS library. I used the following code snipped to convert jts geometry objects into an awt shape. import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Shape; import javax.swing.JFrame; import javax.swing.JPanel; import com.vividsolutions.jts.awt.ShapeWriter; import com.vividsolutions.jts.geom.Coordinate; import com.vividsolutions.jts.geom.GeometryFactory; import com.vividsolutions.jts.geom.LineString; import com.vividsolutions.jts.geom.Polygon; public class Paint extends JPanel{ public void paint(Graphics g) { Coordinate[] coords = new Coordinate[] {new Coordinate(400, 0), new Coordinate(200, 200), new Coordinate(400, 400), new Coordinate(600, 200), new Coordinate(400, 0) }; Polygon polygon = new GeometryFactory().createPolygon(coords); LineString ls = new GeometryFactory().createLineString(new Coordinate[] {new Coordinate(20, 20), new Coordinate(200, 20)}); ShapeWriter sw = new ShapeWriter(); Shape polyShape = sw.toShape(polygon); Shape linShape = sw.toShape(ls); ((Graphics2D) g).draw(polyShape); ((Graphics2D) g).draw(linShape); } public static void main(String[] args) { JFrame f = new JFrame(); f.getContentPane().add(new Paint()); f.setSize(700, 700); f.setVisible(true); } } Edit: The result looks like this image A: According to: http://lists.jump-project.org/pipermail/jts-devel/2007-May/001954.html There's a class: com.vividsolutions.jump.workbench.ui.renderer.java2D.Java2DConverter which can do it?
{ "language": "en", "url": "https://stackoverflow.com/questions/129968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Convert an image to XAML? Does anyone know of any way to convert a simple gif to xaml? E.g. a tool that would look at an image and create ellipses, rectangles and paths based upon a gif / jpg / bitmap? A: Inkscape can trace bitmaps, and can save directly to XAML. And, it happens to be free. I've used it to trace a lot of bitmaps and it's worked really well for me. A: A combination of Vector Magic followed by ViewerSVG produces the best quality results for me. A: Illustrator has a trace tool which will do this. A cheaper option might be http://vectormagic.com - it will export an SVG that you should be able to convert to XAML. A: With this online converter you can convert an image to SVG format. Then download the converted file, open it in a text editor, and you can easily copy the path data. image.online-convert
{ "language": "en", "url": "https://stackoverflow.com/questions/129972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to call Java code from C#? We've developed a Java application and would like to use this application from a C# client. The application has dependencies on Spring, Log4j, ... What would be the most efficient mechanism - make DLL(s) from Java code, ... - to achieve this? A: IKVM! It is really awesome. The only problem is that it DOES add ~30MB to the project. log4net and Spring .NET are available as well, but if you are living with existing code, go the IKVM route. A: Alternatively you could write a webservice/XML-RPC layer between the two. I seem to remember that there is a tool called Grasshopper that will compile your .Net code into JVM bytecode. I've also heard good things about IKVM A: There are so many options, * *sockets *web services *Message bus *Use a/any database! (sorry if that sounds silly) Here's a discussion which may be handy: https://gridwizard.wordpress.com/2015/01/14/java-and-dotnet-interop Really depends on what you're building! A: I am the author of jni4net, an open source interprocess bridge between the JVM and the CLR. It's built on top of JNI and PInvoke. No C/C++ code needed. I hope it will help you.
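As a rough illustration of the IKVM route the first answer recommends (jar, assembly and class names are invented for the example, and the exact ikvmc switches should be checked against the IKVM release you install):
ikvmc -target:library -out:MyJavaApp.dll myjavaapp.jar spring.jar log4j.jar
// C# side: reference MyJavaApp.dll plus the IKVM runtime assemblies, then use the Java classes directly
var service = new com.mycompany.myapp.SomeService();
service.doWork();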
{ "language": "en", "url": "https://stackoverflow.com/questions/129989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Initial skeleton for Firefox extensions? I always seem to have a hard time starting a new Firefox extension. Can anyone recommend a good extension skeleton, scaffold, or code generator? Ideally one that follows all the best practices for FF extensions? A: This one works nicely: https://addons.mozilla.org/en-US/developers/tools/builder Of course googling for "firefox extension generator" is where I found it ;) A: Look up this Eclipse plugin: Spket. It will take care of the skeleton and 50 other things; you will love it.
{ "language": "en", "url": "https://stackoverflow.com/questions/129993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Need to run a TCP server alongside a Rails app I have a Rails 2.0.2 application running with a postgresql db. The machine will receive data on a TCP port. I have already coded a working Ruby multithreaded TCP server to receive the requests, but I need this code to run alongside my Rails app. So I guess I need to know how to spawn a new process inside Rails, or how to create a worker thread that will run my threaded TCP server loop. My Ruby TCP server could have access to ActiveRecord, but it's not necessary (I can always create an http request, posting the received data to the original Rails server) A: Why complicate things? Just run the applications -- your TCP server and the Rails application -- side by side. Either pull the model tier (and ActiveRecord) into your TCP server (svn::externals or Piston might work well for that) and let the communication between the two applications happen through the database, or let the Rails application be the "master" and communicate with it via HTTP as you suggest. To turn a Ruby application into a Windows service, see the win32-service gem available from the win32utils project: http://rubyforge.org/projects/win32utils/ A: I need the TCP server to run as a service on a Windows 2003 server. I use mongrel_service to load Rails as a service, and I do not know of a way to do the same for pure Ruby code. If I could get my TCP server started when the computer boots, I will look into your solution (which seems pretty good nevertheless). A: Don't make your Rails app responsible for the state of the TCP server app. It's really not very well-suited to doing that -- and there's probably no reason that they need to be started in absolute lock-step with each other. Use monit or something to monitor both server processes. It's impossible to say for sure without knowing more of your app architecture, but I'd suggest using ActiveRecord and the database to communicate between your servers instead of HTTP. This way, even if your Rails app is down for some reason, your other server can still process requests. It'll also probably be a bit snappier.
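Since the question already allows for posting the received data to the Rails app over HTTP, here is a bare-bones sketch of that hand-off (the port, URL path and parameter name are invented for illustration; how you frame what is read from the socket depends on your protocol):
require 'socket'
require 'net/http'

server = TCPServer.new(7000)
loop do
  Thread.start(server.accept) do |client|
    payload = client.read          # read until the sender closes the connection
    client.close
    Net::HTTP.post_form(URI.parse('http://localhost:3000/incoming_data'),
                        'payload' => payload)
  end
end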
{ "language": "en", "url": "https://stackoverflow.com/questions/130015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Dropdownlist control with s for asp.net (webforms)? Can anyone recommend a dropdownlist control for asp.net (3.5) that can render option groups? Thanks A: The Sharp Pieces project on CodePlex solves this (and several other) control limitations. A: I use the reflector to see why is not supported. There is why. In the render method of the ListControl no condition is there to create the optgroup. protected internal override void RenderContents(HtmlTextWriter writer) { ListItemCollection items = this.Items; int count = items.Count; if (count > 0) { bool flag = false; for (int i = 0; i < count; i++) { ListItem item = items[i]; if (item.Enabled) { writer.WriteBeginTag("option"); if (item.Selected) { if (flag) { this.VerifyMultiSelect(); } flag = true; writer.WriteAttribute("selected", "selected"); } writer.WriteAttribute("value", item.Value, true); if (item.HasAttributes) { item.Attributes.Render(writer); } if (this.Page != null) { this.Page.ClientScript.RegisterForEventValidation(this.UniqueID, item.Value); } writer.Write('>'); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag("option"); writer.WriteLine(); } } } } So i create my own dropdown Control with an override of the method RenderContents. There is my control. Is working fine. I use exactly the same code of Microsoft, just add a little condition to support listItem having attribute optgroup to create an optgroup and not a option. Give me some feed back public class DropDownListWithOptionGroup : DropDownList { public const string OptionGroupTag = "optgroup"; private const string OptionTag = "option"; protected override void RenderContents(System.Web.UI.HtmlTextWriter writer) { ListItemCollection items = this.Items; int count = items.Count; string tag; string optgroupLabel; if (count > 0) { bool flag = false; for (int i = 0; i < count; i++) { tag = OptionTag; optgroupLabel = null; ListItem item = items[i]; if (item.Enabled) { if (item.Attributes != null && item.Attributes.Count > 0 && item.Attributes[OptionGroupTag] != null) { tag = OptionGroupTag; optgroupLabel = item.Attributes[OptionGroupTag]; } writer.WriteBeginTag(tag); // NOTE(cboivin): Is optionGroup if (!string.IsNullOrEmpty(optgroupLabel)) { writer.WriteAttribute("label", optgroupLabel); } else { if (item.Selected) { if (flag) { this.VerifyMultiSelect(); } flag = true; writer.WriteAttribute("selected", "selected"); } writer.WriteAttribute("value", item.Value, true); if (item.Attributes != null && item.Attributes.Count > 0) { item.Attributes.Render(writer); } if (this.Page != null) { this.Page.ClientScript.RegisterForEventValidation(this.UniqueID, item.Value); } } writer.Write('>'); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag(tag); writer.WriteLine(); } } } } } A: Based on the posts above I've created a c# version of this control with working view state. 
public const string OptionGroupTag = "optgroup"; private const string OptionTag = "option"; protected override void RenderContents(System.Web.UI.HtmlTextWriter writer) { ListItemCollection items = this.Items; int count = items.Count; string tag; string optgroupLabel; if (count > 0) { bool flag = false; for (int i = 0; i < count; i++) { tag = OptionTag; optgroupLabel = null; ListItem item = items[i]; if (item.Enabled) { if (item.Attributes != null && item.Attributes.Count > 0 && item.Attributes[OptionGroupTag] != null) { tag = OptionGroupTag; optgroupLabel = item.Attributes[OptionGroupTag]; } writer.WriteBeginTag(tag); // NOTE(cboivin): Is optionGroup if (!string.IsNullOrEmpty(optgroupLabel)) { writer.WriteAttribute("label", optgroupLabel); } else { if (item.Selected) { if (flag) { this.VerifyMultiSelect(); } flag = true; writer.WriteAttribute("selected", "selected"); } writer.WriteAttribute("value", item.Value, true); if (item.Attributes != null && item.Attributes.Count > 0) { item.Attributes.Render(writer); } if (this.Page != null) { this.Page.ClientScript.RegisterForEventValidation(this.UniqueID, item.Value); } } writer.Write('>'); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag(tag); writer.WriteLine(); } } } } protected override object SaveViewState() { object[] state = new object[this.Items.Count + 1]; object baseState = base.SaveViewState(); state[0] = baseState; bool itemHasAttributes = false; for (int i = 0; i < this.Items.Count; i++) { if (this.Items[i].Attributes.Count > 0) { itemHasAttributes = true; object[] attributes = new object[this.Items[i].Attributes.Count * 2]; int k = 0; foreach (string key in this.Items[i].Attributes.Keys) { attributes[k] = key; k++; attributes[k] = this.Items[i].Attributes[key]; k++; } state[i + 1] = attributes; } } if (itemHasAttributes) return state; return baseState; } protected override void LoadViewState(object savedState) { if (savedState == null) return; if (!(savedState.GetType().GetElementType() == null) && (savedState.GetType().GetElementType().Equals(typeof(object)))) { object[] state = (object[])savedState; base.LoadViewState(state[0]); for (int i = 1; i < state.Length; i++) { if (state[i] != null) { object[] attributes = (object[])state[i]; for (int k = 0; k < attributes.Length; k += 2) { this.Items[i - 1].Attributes.Add (attributes[k].ToString(), attributes[k + 1].ToString()); } } } } else { base.LoadViewState(savedState); } } I hope this helps some people :-) A: A more generic approach to Irfan's jQuery-based solution: backend private void _addSelectItem(DropDownList list, string title, string value, string group = null) { ListItem item = new ListItem(title, value); if (!String.IsNullOrEmpty(group)) { item.Attributes["data-category"] = group; } list.Items.Add(item); } ... _addSelectItem(dropDown, "Option 1", "1"); _addSelectItem(dropDown, "Option 2", "2", "Category"); _addSelectItem(dropDown, "Option 3", "3", "Category"); ... client var groups = {}; $("select option[data-category]").each(function () { groups[$.trim($(this).attr("data-category"))] = true; }); $.each(groups, function (c) { $("select option[data-category='"+c+"']").wrapAll('<optgroup label="' + c + '">'); }); A: I've used the standard control in the past, and just added a simple ControlAdapter for it that would override the default behavior so it could render <optgroup>s in certain places. This works great even if you have controls that don't need the special behavior, because the additional feature doesn't get in the way. 
Note that this was for a specific purpose and written in .Net 2.0, so it may not suit you as well, but it should at least give you a starting point. Also, you have to hook it up using a .browserfile in your project (see the end of the post for an example). 'This codes makes the dropdownlist control recognize items with "--" 'for the label or items with an OptionGroup attribute and render them 'as <optgroup> instead of <option>. Public Class DropDownListAdapter Inherits System.Web.UI.WebControls.Adapters.WebControlAdapter Protected Overrides Sub RenderContents(ByVal writer As HtmlTextWriter) Dim list As DropDownList = Me.Control Dim currentOptionGroup As String Dim renderedOptionGroups As New Generic.List(Of String) For Each item As ListItem In list.Items Page.ClientScript.RegisterForEventValidation(list.UniqueID, item.Value) If item.Attributes("OptionGroup") IsNot Nothing Then 'The item is part of an option group currentOptionGroup = item.Attributes("OptionGroup") If Not renderedOptionGroups.Contains(currentOptionGroup) Then 'the header was not written- do that first 'TODO: make this stack-based, so the same option group can be used more than once in longer select element (check the most-recent stack item instead of anything in the list) If (renderedOptionGroups.Count > 0) Then RenderOptionGroupEndTag(writer) 'need to close previous group End If RenderOptionGroupBeginTag(currentOptionGroup, writer) renderedOptionGroups.Add(currentOptionGroup) End If RenderListItem(item, writer) ElseIf item.Text = "--" Then 'simple separator RenderOptionGroupBeginTag("--", writer) RenderOptionGroupEndTag(writer) Else 'default behavior: render the list item as normal RenderListItem(item, writer) End If Next item If renderedOptionGroups.Count > 0 Then RenderOptionGroupEndTag(writer) End If End Sub Private Sub RenderOptionGroupBeginTag(ByVal name As String, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("optgroup") writer.WriteAttribute("label", name) writer.Write(HtmlTextWriter.TagRightChar) writer.WriteLine() End Sub Private Sub RenderOptionGroupEndTag(ByVal writer As HtmlTextWriter) writer.WriteEndTag("optgroup") writer.WriteLine() End Sub Private Sub RenderListItem(ByVal item As ListItem, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("option") writer.WriteAttribute("value", item.Value, True) If item.Selected Then writer.WriteAttribute("selected", "selected", False) End If For Each key As String In item.Attributes.Keys writer.WriteAttribute(key, item.Attributes(key)) Next key writer.Write(HtmlTextWriter.TagRightChar) HttpUtility.HtmlEncode(item.Text, writer) writer.WriteEndTag("option") writer.WriteLine() End Sub End Class Here's a C# implementation of the same Class: /* This codes makes the dropdownlist control recognize items with "--" * for the label or items with an OptionGroup attribute and render them * as <optgroup> instead of <option>. */ public class DropDownListAdapter : WebControlAdapter { protected override void RenderContents(HtmlTextWriter writer) { //System.Web.HttpContext.Current.Response.Write("here"); var list = (DropDownList)this.Control; string currentOptionGroup; var renderedOptionGroups = new List<string>(); foreach (ListItem item in list.Items) { Page.ClientScript.RegisterForEventValidation(list.UniqueID, item.Value); //Is the item part of an option group? 
if (item.Attributes["OptionGroup"] != null) { currentOptionGroup = item.Attributes["OptionGroup"]; //Was the option header already written, then just render the list item if (renderedOptionGroups.Contains(currentOptionGroup)) RenderListItem(item, writer); //The header was not written,do that first else { //Close previous group if (renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); RenderOptionGroupBeginTag(currentOptionGroup, writer); renderedOptionGroups.Add(currentOptionGroup); RenderListItem(item, writer); } } //Simple separator else if (item.Text == "--") { RenderOptionGroupBeginTag("--", writer); RenderOptionGroupEndTag(writer); } //Default behavior, render the list item as normal else RenderListItem(item, writer); } if (renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); } private void RenderOptionGroupBeginTag(string name, HtmlTextWriter writer) { writer.WriteBeginTag("optgroup"); writer.WriteAttribute("label", name); writer.Write(HtmlTextWriter.TagRightChar); writer.WriteLine(); } private void RenderOptionGroupEndTag(HtmlTextWriter writer) { writer.WriteEndTag("optgroup"); writer.WriteLine(); } private void RenderListItem(ListItem item, HtmlTextWriter writer) { writer.WriteBeginTag("option"); writer.WriteAttribute("value", item.Value, true); if (item.Selected) writer.WriteAttribute("selected", "selected", false); foreach (string key in item.Attributes.Keys) writer.WriteAttribute(key, item.Attributes[key]); writer.Write(HtmlTextWriter.TagRightChar); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag("option"); writer.WriteLine(); } } My browser file was named "App_Browsers\BrowserFile.browser" and looked like this: <!-- You can find existing browser definitions at <windir>\Microsoft.NET\Framework\<ver>\CONFIG\Browsers --> <browsers> <browser refID="Default"> <controlAdapters> <adapter controlType="System.Web.UI.WebControls.DropDownList" adapterType="DropDownListAdapter" /> </controlAdapters> </browser> </browsers> A: I've done this using an outer repeater for the select and optgroups and an inner repeater for the items within that group: <asp:Repeater ID="outerRepeater" runat="server"> <HeaderTemplate> <select id="<%= outerRepeater.ClientID %>"> </HeaderTemplate> <ItemTemplate> <optgroup label="<%# Eval("GroupText") %>"> <asp:Repeater runat="server" DataSource='<%# Eval("Items") %>'> <ItemTemplate> <option value="<%# Eval("Value") %>"><%# Eval("Text") %></option> </ItemTemplate> </asp:Repeater> </optgroup> </ItemTemplate> <FooterTemplate> </select> </FooterTemplate> </asp:Repeater> The data source for outerRepeater is a simple grouping as follows: var data = (from o in thingsToDisplay group oby GetAlphaGrouping(o.Name) into g orderby g.Key select new { Alpha = g.Key, Items = g }); And to get the alpha grouping character: private string GetAlphaGrouping(string value) { string firstChar = value.Substring(0, 1).ToUpper(); int unused; if (int.TryParse(firstChar, out unused)) return "#"; return firstChar.ToUpper(); } It's not a perfect solution but it works. The correct solution would be to no longer use WebForms, but us MVC instead. :) A: I have used JQuery to achieve this task. I first added an new attribute for every ListItem from the backend and then used that attribute in JQuery wrapAll() method to create groups... 
C#: foreach (ListItem item in ((DropDownList)sender).Items) { if (System.Int32.Parse(item.Value) < 5) item.Attributes.Add("classification", "LessThanFive"); else item.Attributes.Add("classification", "GreaterThanFive"); } JQuery: $(document).ready(function() { //Create groups for dropdown list $("select.listsmall option[@classification='LessThanFive']") .wrapAll("&lt;optgroup label='Less than five'&gt;"); $("select.listsmall option[@classification='GreaterThanFive']") .wrapAll("&lt;optgroup label='Greater than five'&gt;"); }); A: Thanks Joel! everyone... here's C# version if you want it: using System; using System.Web.UI.WebControls.Adapters; using System.Web.UI; using System.Web.UI.WebControls; using System.Collections.Generic; using System.Web; //This codes makes the dropdownlist control recognize items with "--"' //for the label or items with an OptionGroup attribute and render them' //as instead of .' public class DropDownListAdapter : WebControlAdapter { protected override void RenderContents(HtmlTextWriter writer) { DropDownList list = Control as DropDownList; string currentOptionGroup; List renderedOptionGroups = new List(); foreach(ListItem item in list.Items) { if (item.Attributes["OptionGroup"] != null) { //'The item is part of an option group' currentOptionGroup = item.Attributes["OptionGroup"]; //'the option header was already written, just render the list item' if(renderedOptionGroups.Contains(currentOptionGroup)) RenderListItem(item, writer); else { //the header was not written- do that first' if (renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); //'need to close previous group' RenderOptionGroupBeginTag(currentOptionGroup, writer); renderedOptionGroups.Add(currentOptionGroup); RenderListItem(item, writer); } } else if (item.Text == "--") //simple separator { RenderOptionGroupBeginTag("--", writer); RenderOptionGroupEndTag(writer); } else { //default behavior: render the list item as normal' RenderListItem(item, writer); } } if(renderedOptionGroups.Count > 0) RenderOptionGroupEndTag(writer); } private void RenderOptionGroupBeginTag(string name, HtmlTextWriter writer) { writer.WriteBeginTag("optgroup"); writer.WriteAttribute("label", name); writer.Write(HtmlTextWriter.TagRightChar); writer.WriteLine(); } private void RenderOptionGroupEndTag(HtmlTextWriter writer) { writer.WriteEndTag("optgroup"); writer.WriteLine(); } private void RenderListItem(ListItem item, HtmlTextWriter writer) { writer.WriteBeginTag("option"); writer.WriteAttribute("value", item.Value, true); if (item.Selected) writer.WriteAttribute("selected", "selected", false); foreach (string key in item.Attributes.Keys) writer.WriteAttribute(key, item.Attributes[key]); writer.Write(HtmlTextWriter.TagRightChar); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag("option"); writer.WriteLine(); } } A: The above code renders the end tag for the optgroup before any of the options, so the options don't get indented like they should in addition to the markup not properly representing the grouping. 
Here's my slightly modified version of Tom's code: public class ExtendedDropDownList : System.Web.UI.WebControls.DropDownList { public const string OptionGroupTag = "optgroup"; private const string OptionTag = "option"; protected override void RenderContents(System.Web.UI.HtmlTextWriter writer) { ListItemCollection items = this.Items; int count = items.Count; string tag; string optgroupLabel; if (count > 0) { bool flag = false; string prevOptGroup = null; for (int i = 0; i < count; i++) { tag = OptionTag; optgroupLabel = null; ListItem item = items[i]; if (item.Enabled) { if (item.Attributes != null && item.Attributes.Count > 0 && item.Attributes[OptionGroupTag] != null) { optgroupLabel = item.Attributes[OptionGroupTag]; if (prevOptGroup != optgroupLabel) { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } writer.WriteBeginTag(OptionGroupTag); if (!string.IsNullOrEmpty(optgroupLabel)) { writer.WriteAttribute("label", optgroupLabel); } writer.Write('>'); } item.Attributes.Remove(OptionGroupTag); prevOptGroup = optgroupLabel; } else { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } prevOptGroup = null; } writer.WriteBeginTag(tag); if (item.Selected) { if (flag) { this.VerifyMultiSelect(); } flag = true; writer.WriteAttribute("selected", "selected"); } writer.WriteAttribute("value", item.Value, true); if (item.Attributes != null && item.Attributes.Count > 0) { item.Attributes.Render(writer); } if (optgroupLabel != null) { item.Attributes.Add(OptionGroupTag, optgroupLabel); } if (this.Page != null) { this.Page.ClientScript.RegisterForEventValidation(this.UniqueID, item.Value); } writer.Write('>'); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag(tag); writer.WriteLine(); if (i == count - 1) { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } } } } } } protected override object SaveViewState() { object[] state = new object[this.Items.Count + 1]; object baseState = base.SaveViewState(); state[0] = baseState; bool itemHasAttributes = false; for (int i = 0; i < this.Items.Count; i++) { if (this.Items[i].Attributes.Count > 0) { itemHasAttributes = true; object[] attributes = new object[this.Items[i].Attributes.Count * 2]; int k = 0; foreach (string key in this.Items[i].Attributes.Keys) { attributes[k] = key; k++; attributes[k] = this.Items[i].Attributes[key]; k++; } state[i + 1] = attributes; } } if (itemHasAttributes) return state; return baseState; } protected override void LoadViewState(object savedState) { if (savedState == null) return; if (!(savedState.GetType().GetElementType() == null) && (savedState.GetType().GetElementType().Equals(typeof(object)))) { object[] state = (object[])savedState; base.LoadViewState(state[0]); for (int i = 1; i < state.Length; i++) { if (state[i] != null) { object[] attributes = (object[])state[i]; for (int k = 0; k < attributes.Length; k += 2) { this.Items[i - 1].Attributes.Add (attributes[k].ToString(), attributes[k + 1].ToString()); } } } } else { base.LoadViewState(savedState); } } } Use it like this: ListItem item1 = new ListItem("option1"); item1.Attributes.Add("optgroup", "CatA"); ListItem item2 = new ListItem("option2"); item2.Attributes.Add("optgroup", "CatA"); ListItem item3 = new ListItem("option3"); item3.Attributes.Add("optgroup", "CatB"); ListItem item4 = new ListItem("option4"); item4.Attributes.Add("optgroup", "CatB"); ListItem item5 = new ListItem("NoOptGroup"); ddlTest.Items.Add(item1); ddlTest.Items.Add(item2); ddlTest.Items.Add(item3); ddlTest.Items.Add(item4); 
ddlTest.Items.Add(item5); and here's the generated markup (indented for ease of viewing): <select name="ddlTest" id="Select1"> <optgroup label="CatA"> <option selected="selected" value="option1">option1</option> <option value="option2">option2</option> </optgroup> <optgroup label="CatB"> <option value="option3">option3</option> <option value="option4">option4</option> </optgroup> <option value="NoOptGroup">NoOptGroup</option> </select> A: As the answers above that overload the RenderContents method do work. You also have to remember to alter the viewstate. I have come into an issue when using the non-altered viewstate in UpdatePanels. This has parts taken from the Sharp Pieces Project. Protected Overloads Overrides Sub RenderContents(ByVal writer As HtmlTextWriter) Dim list As DropDownList = Me Dim currentOptionGroup As String Dim renderedOptionGroups As New List(Of String)() For Each item As ListItem In list.Items If item.Attributes("OptionGroup") Is Nothing Then RenderListItem(item, writer) Else currentOptionGroup = item.Attributes("OptionGroup") If renderedOptionGroups.Contains(currentOptionGroup) Then RenderListItem(item, writer) Else If renderedOptionGroups.Count > 0 Then RenderOptionGroupEndTag(writer) End If RenderOptionGroupBeginTag(currentOptionGroup, writer) renderedOptionGroups.Add(currentOptionGroup) RenderListItem(item, writer) End If End If Next If renderedOptionGroups.Count > 0 Then RenderOptionGroupEndTag(writer) End If End Sub Private Sub RenderOptionGroupBeginTag(ByVal name As String, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("optgroup") writer.WriteAttribute("label", name) writer.Write(HtmlTextWriter.TagRightChar) writer.WriteLine() End Sub Private Sub RenderOptionGroupEndTag(ByVal writer As HtmlTextWriter) writer.WriteEndTag("optgroup") writer.WriteLine() End Sub Private Sub RenderListItem(ByVal item As ListItem, ByVal writer As HtmlTextWriter) writer.WriteBeginTag("option") writer.WriteAttribute("value", item.Value, True) If item.Selected Then writer.WriteAttribute("selected", "selected", False) End If For Each key As String In item.Attributes.Keys writer.WriteAttribute(key, item.Attributes(key)) Next writer.Write(HtmlTextWriter.TagRightChar) HttpUtility.HtmlEncode(item.Text, writer) writer.WriteEndTag("option") writer.WriteLine() End Sub Protected Overrides Function SaveViewState() As Object ' Create an object array with one element for the CheckBoxList's ' ViewState contents, and one element for each ListItem in skmCheckBoxList Dim state(Me.Items.Count + 1 - 1) As Object 'stupid vb array Dim baseState As Object = MyBase.SaveViewState() state(0) = baseState ' Now, see if we even need to save the view state Dim itemHasAttributes As Boolean = False For i As Integer = 0 To Me.Items.Count - 1 If Me.Items(i).Attributes.Count > 0 Then itemHasAttributes = True ' Create an array of the item's Attribute's keys and values Dim attribKV(Me.Items(i).Attributes.Count * 2 - 1) As Object 'stupid vb array Dim k As Integer = 0 For Each key As String In Me.Items(i).Attributes.Keys attribKV(k) = key k += 1 attribKV(k) = Me.Items(i).Attributes(key) k += 1 Next state(i + 1) = attribKV End If Next ' return either baseState or state, depending on whether or not ' any ListItems had attributes If (itemHasAttributes) Then Return state Else Return baseState End If End Function Protected Overrides Sub LoadViewState(ByVal savedState As Object) If savedState Is Nothing Then Return ' see if savedState is an object or object array If Not savedState.GetType.GetElementType() Is Nothing 
AndAlso savedState.GetType.GetElementType().Equals(GetType(Object)) Then ' we have just the base state MyBase.LoadViewState(savedState(0)) 'we have an array of items with attributes Dim state() As Object = savedState MyBase.LoadViewState(state(0)) '/ load the base state For i As Integer = 1 To state.Length - 1 If Not state(i) Is Nothing Then ' Load back in the attributes Dim attribKV() As Object = state(i) For k As Integer = 0 To attribKV.Length - 1 Step +2 Me.Items(i - 1).Attributes.Add(attribKV(k).ToString(), attribKV(k + 1).ToString()) Next End If Next Else 'load it normal MyBase.LoadViewState(savedState) End If End Sub A: // How to use: // 1. Create items in a select element or asp:DropDownList control // 2. Set value of an option or ListItem to "_group_", those will be converted to optgroups // 3. On page onload call createOptGroups(domElement), for example like this: // - var lst = document.getElementById('lst'); // - createOptGroups(lst, "_group_"); // 4. You can change groupMarkerValue to anything, I used "_group_" function createOptGroups(lst, groupMarkerValue) { // Get an array containing the options var childNodes = []; for (var i = 0; i < lst.options.length; i++) childNodes.push(lst.options[i]); // Get the selected element so we can preserve selection var selectedIndex = lst.selectedIndex; var selectedChild = childNodes[selectedIndex]; var selectedValue = selectedChild.value; // Remove all elements while (lst.hasChildNodes()) lst.removeChild(lst.childNodes[0]); // Go through the array of options and convert some into groups var group = null; for (var i = 0; i < childNodes.length; i++) { var node = childNodes[i]; if (node.value == groupMarkerValue) { group = document.createElement("optgroup"); group.label = node.text; group.value = groupMarkerValue; lst.appendChild(group); continue; } // Add to group or directly under list (group == null ? lst : group).appendChild(node); } // Preserve selected, no support for multi-selection here, sorry selectedChild.selected = true; } Tested on Chrome 16, Firefox 9 and IE8. A: I expanded upon mkl's answer to make a DropDownList control to which you can DataBind. 
Here's the result (which may be subject to improvement): public class UcDropDownListWithOptGroup : DropDownList { public const string OptionGroupTag = "optgroup"; private const string OptionTag = "option"; protected override void RenderContents(HtmlTextWriter writer) { ListItemCollection items = this.Items; int count = items.Count; string tag; string optgroupLabel; if (count > 0) { bool flag = false; string prevOptGroup = null; for (int i = 0; i < count; i++) { tag = OptionTag; optgroupLabel = null; ListItem item = items[i]; if (item.Enabled) { if (item.Attributes != null && item.Attributes.Count > 0 && item.Attributes["data-optgroup"] != null) { optgroupLabel = item.Attributes["data-optgroup"]; if (prevOptGroup != optgroupLabel) { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } writer.WriteBeginTag(OptionGroupTag); if (!string.IsNullOrEmpty(optgroupLabel)) { writer.WriteAttribute("label", optgroupLabel); } writer.Write('>'); } item.Attributes.Remove(OptionGroupTag); prevOptGroup = optgroupLabel; } else { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } prevOptGroup = null; } writer.WriteBeginTag(tag); if (item.Selected) { if (flag) { this.VerifyMultiSelect(); } flag = true; writer.WriteAttribute("selected", "selected"); } writer.WriteAttribute("value", item.Value, true); if (item.Attributes != null && item.Attributes.Count > 0) { item.Attributes.Render(writer); } if (optgroupLabel != null) { item.Attributes.Add(OptionGroupTag, optgroupLabel); } if (this.Page != null) { this.Page.ClientScript.RegisterForEventValidation(this.UniqueID, item.Value); } writer.Write('>'); HttpUtility.HtmlEncode(item.Text, writer); writer.WriteEndTag(tag); writer.WriteLine(); if (i == count - 1) { if (prevOptGroup != null) { writer.WriteEndTag(OptionGroupTag); } } } } } } protected override object SaveViewState() { object[] state = new object[this.Items.Count + 1]; object baseState = base.SaveViewState(); state[0] = baseState; bool itemHasAttributes = false; for (int i = 0; i < this.Items.Count; i++) { if (this.Items[i].Attributes.Count > 0) { itemHasAttributes = true; object[] attributes = new object[this.Items[i].Attributes.Count * 2]; int k = 0; foreach (string key in this.Items[i].Attributes.Keys) { attributes[k] = key; k++; attributes[k] = this.Items[i].Attributes[key]; k++; } state[i + 1] = attributes; } } if (itemHasAttributes) return state; return baseState; } protected override void LoadViewState(object savedState) { if (savedState == null) return; if (!(savedState.GetType().GetElementType() == null) && (savedState.GetType().GetElementType().Equals(typeof(object)))) { object[] state = (object[])savedState; base.LoadViewState(state[0]); for (int i = 1; i < state.Length; i++) { if (state[i] != null) { object[] attributes = (object[])state[i]; for (int k = 0; k < attributes.Length; k += 2) { this.Items[i - 1].Attributes.Add (attributes[k].ToString(), attributes[k + 1].ToString()); } } } } else { base.LoadViewState(savedState); } } protected override void PerformDataBinding(IEnumerable dataSource) { base.PerformDataBinding(dataSource); if (!string.IsNullOrWhiteSpace(DataOptGroupField) && OptGroupTitles != null) { var currentItems = Items; var dataSourceItems = dataSource.Cast<object>().ToList(); var staticItemsCount = Items.Count - dataSourceItems.Count; for (var i = staticItemsCount; i < Items.Count; i++) { var dataSourceItem = dataSourceItems[i - staticItemsCount]; var optGroupValue = DataBinder.GetPropertyValue(dataSourceItem, DataOptGroupField); 
currentItems[i].Attributes.Add("data-optgroup", OptGroupTitles[optGroupValue]); } } } public Dictionary<object, string> OptGroupTitles { get; set; } public string DataOptGroupField { get; set; } } Things to note: * *You can set a DataOptGroupField property, similiar to DataTextField and DataValueField. It defines which property of the databound item it should take into consideration for determining the correct optgroup. *You can set a OptGroupTitles property which defines to which optgroup label the property defined in DataOptGroupField should be translated. As an example, take the following code in your page or user control: <MyControls:UcDropDownListWithOptGroup runat="server" DataSourceID="dsX" DataTextField="MyDataTextField" DataValueField="MyDataValueField" DataOptGroupField="IsActive" OptGroupTitles='<%# MyGroupTitles %>' /> Where the 'IsActive' property of the databound item is a boolean. In the code-behind of the page or user control, you define the following property: public Dictionary<object, string> MyGroupTitles { get { return new Dictionary<object, string> { { true, "Active" }, { false, "Inactive" } }; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/130020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: How can I make two browser windows share the same "session"? I have an app that needs to open a new window (in the same domain) so the user can view a report, but on some browsers* the new window doesn't share the non-persistent cookie of the original window, which causes the user to have to sign in again. Is there anything I can do to stop the user having to sign in again in the new window? *In fact, in IE7 it is sporadic - sometimes new windows share cookies, sometimes not. A: I thought IE7 shared non-persistent cookies with tabs in the same window, as well as windows that were generated from the current window (whether or not this is the same for manual opens like File->New, or programmatic script opens, I'm not sure), but that fresh instances did not. Firefox shares them across all windows, regardless of how they were opened. I've always assumed that this is just the way it is, and you'd have to use persistent cookies, cookie-less sessions, or develop a single sign-on/ticketing mechanism to work around it. A: IE7 does seem to generate new processes with a different algorithm than IE6, and can cause issues with session cookies. The most reliable solution is probably going to be to architect around it - either with cookieless sessions, a persistent cookie, or just serializing the data you need in the page. A: I'm using ASP.NET and relying on the behaviour of sessions being shared across browser windows and it's working for me. In fact, I'm even using it for the same reason as you to show a report in the new window :) A: They should share cookies. That has been my experience in the past. I'll edit once I've had a play. A: Could it be related to how you are opening the window, e.g. - JavaScript vs. target tag?
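Following up on that last answer's question, here are the two ways of opening the window it is asking about; the report URL, window name and size are placeholders for illustration, not anything from the answers above:

<!-- Option 1: a plain link with a target; the browser decides how to open it. -->
<a href="ViewReport.aspx" target="_blank">View report</a>

<!-- Option 2: open the window from script, naming it so repeat clicks reuse it. -->
<script type="text/javascript">
    function openReport() {
        // Whether the new window shares the session cookie can depend on the
        // browser and on how the new window/process is created.
        window.open('ViewReport.aspx', 'reportWindow', 'width=800,height=600');
    }
</script>
<input type="button" value="View report" onclick="openReport();" />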
{ "language": "en", "url": "https://stackoverflow.com/questions/130021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Multi-line string in a PropertyGrid Is there a built-in editor for a multi-line string in a PropertyGrid. A: I found that System.Design.dll has System.ComponentModel.Design.MultilineStringEditor which can be used as follows: public class Stuff { [Editor(typeof(MultilineStringEditor), typeof(UITypeEditor))] public string MultiLineProperty { get; set; } } A: No, you will need to create what's called a modal UI type editor. You'll need to create a class that inherits from UITypeEditor. This is basically a form that gets shown when you click on the ellipsis button on the right side of the property you are editing. The only drawback I found, was that I needed to decorate the specific string property with a specific attribute. It's been a while since I had to do that. I got this information from a book by Chris Sells called "Windows Forms Programming in C#" There's a commercial propertygrid called Smart PropertyGrid.NET by VisualHint. A: We need to write our custom editor to get the multiline support in property grid. Here is the customer text editor class implemented from UITypeEditor public class MultiLineTextEditor : UITypeEditor { private IWindowsFormsEditorService _editorService; public override UITypeEditorEditStyle GetEditStyle(ITypeDescriptorContext context) { return UITypeEditorEditStyle.DropDown; } public override object EditValue(ITypeDescriptorContext context, IServiceProvider provider, object value) { _editorService = (IWindowsFormsEditorService)provider.GetService(typeof(IWindowsFormsEditorService)); TextBox textEditorBox = new TextBox(); textEditorBox.Multiline = true; textEditorBox.ScrollBars = ScrollBars.Vertical; textEditorBox.Width = 250; textEditorBox.Height = 150; textEditorBox.BorderStyle = BorderStyle.None; textEditorBox.AcceptsReturn = true; textEditorBox.Text = value as string; _editorService.DropDownControl(textEditorBox); return textEditorBox.Text; } } Write your custom property grid and apply this Editor attribute to the property class CustomPropertyGrid { private string multiLineStr = string.Empty; [Editor(typeof(MultiLineTextEditor), typeof(UITypeEditor))] public string MultiLineStr { get { return multiLineStr; } set { multiLineStr = value; } } } In main form assign this object propertyGrid1.SelectedObject = new CustomPropertyGrid(); A: Yes. I don't quite remember how it is called, but look at the Items property editor for something like ComboBox Edited: As of @fryguybob, ComboBox.Items uses the System.Windows.Forms.Design.ListControlStringCollectionEditor
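For anyone who wants to try the MultilineStringEditor answer above quickly, here is a minimal WinForms host for it. It assumes the Stuff class from that answer is present in the project and that System.Design and System.Windows.Forms are referenced:

using System;
using System.Windows.Forms;

static class PropertyGridDemo
{
    [STAThread]
    static void Main()
    {
        // Host a PropertyGrid and point it at an instance of the Stuff class
        // from the first answer; clicking the drop-down arrow next to
        // MultiLineProperty opens the multi-line editor.
        var grid = new PropertyGrid { Dock = DockStyle.Fill };
        grid.SelectedObject = new Stuff();

        var form = new Form { Text = "MultilineStringEditor demo", Width = 400, Height = 300 };
        form.Controls.Add(grid);
        Application.Run(form);
    }
}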
{ "language": "en", "url": "https://stackoverflow.com/questions/130032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: How to Limit Download Speeds from my Website on my IIS Windows Server? When people download files from my website, I don't want them to be able to download faster than 300KB/sec per file. Is there any way to do this? I'm running IIS 6.0 on Windows Server 2003. A: You can't limit download speed, but you can limit the overall traffic to a particular website: * *Open IIS MMC *Select Website *Select Performance tab *Enable 'Bandwidth throttling' A: Write a script that transfers the data in chunks. After each 300KB chunk, wait until the rest of the second has elapsed. A: I just found this but I haven't had time to try it out myself: IIS Bit Rate Throttling A: I agree with Horcrux (can't vote it up as I don't have enough rep): if the file is less than 300KB then this won't work, but for large files the average over the course of the whole download will be 300KB/sec... I'm assuming the idea is like a rapidshare idea, where premium users get full-speed downloads? Also, while one thread (user) is waiting for a second, another thread can be downloading. Queue the downloads, only let X amount run at the same time, and you're away! A: Within website properties in IIS 6.0 there is a Performance tab, and the first setting is Bandwidth throttling, which allows you to set the maximum bandwidth value in kilobytes per second. It also has this note: For bandwidth throttling to function, IIS needs to install Windows Packet Scheduler. I'm guessing using this setting would mean having your downloads on a separate site so you can throttle that but maintain full bandwidth to your normal content. A: For IIS 10, go to IIS Manager and you will find the setting under the header Media Services > Bit Rate Throttling A: Reduce the speed of your Internet connection.
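As a rough illustration of the chunk-and-sleep suggestion above, here is a minimal sketch of a classic ASP.NET IHttpHandler that streams a file at roughly 300 KB/sec. The handler name, the file path and the exact chunk size are made up for the example and are not part of any answer above:

using System.IO;
using System.Threading;
using System.Web;

// Rough throttling: write ~300 KB, flush it to the client, then sleep out the second.
public class ThrottledDownloadHandler : IHttpHandler
{
    private const int BytesPerSecond = 300 * 1024;

    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical file location; in practice this would come from the request.
        string path = context.Server.MapPath("~/files/sample.zip");

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=sample.zip");

        byte[] buffer = new byte[BytesPerSecond];
        using (FileStream fs = File.OpenRead(path))
        {
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 &&
                   context.Response.IsClientConnected)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();      // push the chunk to the client
                Thread.Sleep(1000);            // one chunk per second, so ~300 KB/sec
            }
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

As one of the answers points out, this only averages out to 300 KB/sec for files considerably larger than a single chunk.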
{ "language": "en", "url": "https://stackoverflow.com/questions/130034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: how are serial generators / cracks developed? I mean, I always was wondered about how the hell somebody can develop algorithms to break/cheat the constraints of legal use in many shareware programs out there. Just for curiosity. A: Nils's post deals with key generators. For cracks, usually you find a branch point and invert (or remove the condition) the logic. For example, you'll test to see if the software is registered, and the test may return zero if so, and then jump accordingly. You can change the "jump if equals zero (je)" to "jump if not-equals zero (jne)" by modifying a single byte. Or you can write no-operations over various portions of the code that do things that you don't want to do. Compiled programs can be disassembled and with enough time, determined people can develop binary patches. A crack is simply a binary patch to get the program to behave differently. A: First, most copy-protection schemes aren't terribly well advanced, which is why you don't see a lot of people rolling their own these days. There are a few methods used to do this. You can step through the code in a debugger, which does generally require a decent knowledge of assembly. Using that you can get an idea of where in the program copy protection/keygen methods are called. With that, you can use a disassembler like IDA Pro to analyze the code more closely and try to understand what is going on, and how you can bypass it. I've cracked time-limited Betas before by inserting NOOP instructions over the date-check. It really just comes down to a good understanding of software and a basic understanding of assembly. Hak5 did a two-part series on the first two episodes this season on kind of the basics of reverse engineering and cracking. It's really basic, but it's probably exactly what you're looking for. A: Apart from being illegal, it's a very complex task. Speaking just at a teoretical level the common way is to disassemble the program to crack and try to find where the key or the serialcode is checked. Easier said than done since any serious protection scheme will check values in multiple places and also will derive critical information from the serial key for later use so that when you think you guessed it, the program will crash. To create a crack you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump or storing costants into memory locations). To create a keygen you have to understand the algorithm and write a program to re-do the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule, the sum of the digit should have been a multiple of 7, so writing the keygen was rather trivial). Both activities requires you to follow the execution of the application into a debugger and try to figure out what's happening. And you need to know the low level API of your Operating System. Some heavily protected application have the code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory but then they refuse to start if they detect that an in-memory debugger has started, In essence it's something that requires a very deep knowledge, ingenuity and a lot of time! Oh, did I mention that is illegal in most countries? If you want to know more, Google for the +ORC Cracking Tutorials they are very old and probably useless nowdays but will give you a good idea of what it means. 
Anyway, a very good reason to know all this is if you want to write your own protection scheme. A: A would-be cracker disassembles the program and looks for the "copy protection" bits, specifically for the algorithm that determines if a serial number is valid. From that code, you can often see what pattern of bits is required to unlock the functionality, and then write a generator to create numbers with those patterns. Another alternative is to look for functions that return "true" if the serial number is valid and "false" if it's not, then develop a binary patch so that the function always returns "true". Everything else is largely a variant on those two ideas. Copy protection is always breakable by definition - at some point you have to end up with executable code or the processor couldn't run it. A: For the serial number, you can just extract the algorithm and start throwing "guesses" at it and look for a positive response. Computers are powerful; it usually takes only a little while before it starts spitting out hits. As for hacking, I used to be able to step through programs at a high level and look for the point where they stopped working. Then you go back to the last "call" that succeeded and step into it, then repeat. Back then, the copy protection usually consisted of writing to the disk and seeing if a subsequent read succeeded (if so, the copy protection failed, because publishers used to burn part of the floppy with a laser so it couldn't be written to). Then it was just a matter of finding the right call and hardcoding the correct return value from that call. I'm sure it's still similar, but they go through a lot of effort to hide the location of the call. The last one I tried I gave up on, because it kept loading code over the code I was single-stepping through, and I'm sure it's gotten a lot more complicated since then. A: The bad guys search for the key-check code using a disassembler. This is relatively easy if you know how to do it. Afterwards you translate the key-checking code to C or another language (this step is optional). Reversing the process of key-checking gives you a key generator. If you know assembler, it takes roughly a weekend to learn how to do this. I did it just some years ago (never released anything, though; it was just research for my game-development job - to write a hard-to-crack key you have to understand how people approach cracking). A: I wonder why they don't just distribute personalized binaries, where the name of the owner is stored somewhere (encrypted and obfuscated) in the binary, or better, distributed over the whole binary. AFAIK Apple is doing this with the music files from the iTunes store; however, there it's far too easy to remove the name from the files. A: I assume each crack is different, but I would guess in most cases somebody spends a lot of time in the debugger tracing the application in question. The serial generator takes that one step further by analyzing the algorithm that checks the serial number for validity and reverse engineers it.
{ "language": "en", "url": "https://stackoverflow.com/questions/130058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Developing an operating system for the x86 architecture I am planning to develop an operating system for the x86 architecture. * *What options of programming languages do I have? *What types of compilers are there available, preferably on a Windows environment? *Are there any good sources that will help me learn more about operating system development? *Is it better to test my operating system on a Virtual Machine or on physical hardware? Any suggestions? A: Language and compiler depend entirely on what you're attempting to accomplish. I would suggest, though, that you might be approaching the problem from too low a level. There are materials out there on operating system fundamentals. MIT has OpenCourseware on the subject. Read through Andrew Tannenbaum's Operating Systems series, and look at things like Minix. Get an idea for what's out there. Start tinkering with things. Borrow ideas, and see where they go. You can reinvent the wheel if you really want, but you'll learn more by building on the works of others. A: It doesn't really matter, what language you choose. If the language is Turing-complete, then you can write an OS in it. However, the expressiveness of the language will make certain kinds of designs very easy or very hard to implement. For example, the "liveliness" and dynamism of the old Smalltalk OSs depends on the fact that they are implemented in Smalltalk. You could do that in C, too, but it would probably be so hard that you wouldn't even think about it. JavaScript or Ruby OTOH would probably be a great fit. Microsoft Research's Singularity is another example. It simply couldn't be implemented in anything other than Sing#, Spec# and C# (or similar languages), because so much of the architecture is dependent on the static type safety and static verifiability of those languages. One thing to keep in mind: the design space for OSs implemented in C is pretty much fully explored. There's literally thousands of them. In other languages, however, you might actually discover something that nobody has discovered before! There's only about a dozen or so OSs written in Java, about half a dozen in C#, something on the order of two OSs in Haskell, only one in Python and none in Ruby or JavaScript. Try writing an OS in Erlang or Io, and see how that influences your thinking about Operating Systems! A: For my final year project in collage I developed a small x86 OS with a virtual memory manager, a virtual file system and fully preemptive multitasking. I made it open source and the code is heavily commented, check out its source forge page at: https://github.com/stephenfewer/NoNameOS From my experience I can recommend the following: You will need x86 assembly language for various parts, this in unavoidable, but can be kept to a minimum. Fairly quickly you will get running C code, which is a proven choice for OS development. Once you have some sort of memory manager available you can go into C++ if you like (you need some kind of memory manager for things like new and delete). No matter what language you choose you will still need assembly & C to bring a system from boot where the BIOS leaves you into any useable form. Ultimately, the primary language you choose will depend on the type of OS you want to develop. My development environment was the Windows port of the GNU development tools DJGPP along with the NASM assembler. For my IDE I used IBM's Eclipse with the CDT plugin which provides a C/C++ development environment within Eclipse. 
For testing I recommend BOCHS, an open source x86 PC emulator. It lets you boot up your OS quickly which is great for testing and can be integrated into eclipse so you can build and run your OS at the push of a button. I would also recommend using both VMWare and a physical PC occasionally as you can pick up on some subtle bugs that way. P.S. OS development is really fun but is very intensive, mine took the best part of 12 months. My advice is to plan well and your design is key! enjoy :) A: There is an OS course offered at the University of Maryland that utilizes GeekOS. This is a small, extensively commented OS designed for educational purposes which can be run using the Bochs or QEMU emulators. For an example of how it is used in a course, check out a previous offering of the course at the class webpage. There, you will find assignments where you have to add different functionality to GeekOS. Its a great way to get familiar with a small and simple OS that runs on the x86 architecture. A: You might want to look up XINU. it's a tiny OS for x86 that isn't really used for anything other than to be dissected by students. A: Use ANSI C, and start off with an emulator. When you port over to a real machine, there will be some assembler code. Context switching and interrupt handling (for instance) is easier to write in assembler. Andy Tannenbaum has written a good book on OS. Many other good ones exist. Good luck! There is nothing quite like haveing written your own OS, however small. A: Also check out the OSDev.org which have all information you need to get started. A: I've done that once for a 386SX, which was on a PCI board. A good source on how to start a X86 cpu in protected mode is the source code of linux. It's just a few assembly statements. After that you can use gcc to compile your C code. The result is objectcode in ELF format. I wrote my own linker, to make a program out of the objectcode. And yes, it worked! Good luck. A: Be sure to check out the answers to my question: How to get started in operating system development A: Without a doubt, I'd use Ada. It's the best general-purpose systems-programming language I have come across, bar none. One example, Ada's much better for specifying bit layout of objects in a record than C. Ada also supports overlaying records on specific memory locations. C requires you to play with pointers to acheive the same effect. That works, but is more error-prone. Ada also has language support for interrupts. Another: Safety. Ada defaults to bound checking array assignments, but allows you to turn it off when you need it. C "defaults" to no bound checking on arrays,so you have to do it yourself manually whenever you want it. Time has shown that this is not the right default. You will forget one where it is needed. Buffer overflow exploits are the single most common security flaw used by crackers. They have whole websites explainng how to find and use them. As for learning about doing this, the two books I'm aware of are XINU (Unix backwards, nothing to do with Scientology), and Project Oberon. The first was used in my Operating Systems graduate course, and the second was written by Nikalus Wirth, creator of Pascal. A: If you are making a full OS, you will need to use a range of languages. I would expect Assembly, C and C++ at the very least. I would use a Virtual Machine for most of the testing. A: C most probably...all major OS-es have been written in C/C++ or Objective-C(Apple) A: If you want write an OS then you need a couple of people. 
An OS cannot be written by a single person. I think it is better to work on existing OS projects * *Reactos --> C, Assembler *SharpOS --> C# *JNode --> Java This is only a short list of OS projects. As you can see, there is a project for almost every possible language.
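As a footnote to the "it's just a few assembly statements" point made earlier in this thread, here is a minimal boot-sector sketch you can assemble with NASM and run in BOCHS or QEMU. It is generic example code, not taken from any of the projects mentioned above:

; boot.asm - assemble with:  nasm -f bin boot.asm -o boot.img
; run with e.g.:             qemu-system-i386 -fda boot.img
BITS 16
ORG 0x7C00                  ; the BIOS loads the boot sector at this address

start:
    xor ax, ax
    mov ds, ax              ; make DS match the ORG 0x7C00 assumption
    mov si, msg
print_loop:
    lodsb                   ; fetch the next character of the message
    or al, al
    jz halt                 ; zero terminator -> stop printing
    mov ah, 0x0E            ; BIOS teletype output
    int 0x10
    jmp print_loop
halt:
    cli
    hlt
    jmp halt

msg db "Booted!", 0

times 510 - ($ - $$) db 0   ; pad the sector to 510 bytes
dw 0xAA55                   ; boot signature

From there, a real OS would switch to protected mode and hand control to C code, which is where the books and projects listed above come in.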
{ "language": "en", "url": "https://stackoverflow.com/questions/130065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Is there an inverse function for time.gmtime() that parses a UTC tuple to seconds since the epoch? python's time module seems a little haphazard. For example, here is a list of methods in there, from the docstring: time() -- return current time in seconds since the Epoch as a float clock() -- return CPU time since process start as a float sleep() -- delay for a number of seconds given as a float gmtime() -- convert seconds since Epoch to UTC tuple localtime() -- convert seconds since Epoch to local time tuple asctime() -- convert time tuple to string ctime() -- convert time in seconds to string mktime() -- convert local time tuple to seconds since Epoch strftime() -- convert time tuple to string according to format specification strptime() -- parse string to time tuple according to format specification tzset() -- change the local timezone Looking at localtime() and its inverse mktime(), why is there no inverse for gmtime() ? Bonus questions: what would you name the method ? How would you implement it ? A: I always thought the time and datetime modules were a little incoherent. Anyways, here's the inverse of mktime import time def mkgmtime(t): """Convert UTC tuple to seconds since Epoch""" return time.mktime(t)-time.timezone A: There is actually an inverse function, but for some bizarre reason, it's in the calendar module: calendar.timegm(). I listed the functions in this answer. A: I'm only a newbie to Python, but here's my approach. def mkgmtime(fields): now = int(time.time()) gmt = list(time.gmtime(now)) gmt[8] = time.localtime(now).tm_isdst disp = now - time.mktime(tuple(gmt)) return disp + time.mktime(fields) There, my proposed name for the function too. :-) It's important to recalculate disp every time, in case the daylight-savings value changes or the like. (The conversion back to tuple is required for Jython. CPython doesn't seem to require it.) This is super ick, because time.gmtime sets the DST flag to false, always. I hate the code, though. There's got to be a better way to do it. And there are probably some corner cases that I haven't got, yet. A: mktime documentation is a bit misleading here, there is no meaning saying it's calculated as a local time, rather it's calculating the seconds from Epoch according to the supplied tuple - regardless of your computer locality. If you do want to do a conversion from a utc_tuple to local time you can do the following: >>> time.ctime(time.time()) 'Fri Sep 13 12:40:08 2013' >>> utc_tuple = time.gmtime() >>> time.ctime(time.mktime(utc_tuple)) 'Fri Sep 13 10:40:11 2013' >>> time.ctime(time.mktime(utc_tuple) - time.timezone) 'Fri Sep 13 12:40:11 2013' Perhaps a more accurate question would be how to convert a utc_tuple to a local_tuple. I would call it gm_tuple_to_local_tuple (I prefer long and descriptive names): >>> time.localtime(time.mktime(utc_tuple) - time.timezone) time.struct_time(tm_year=2013, tm_mon=9, tm_mday=13, tm_hour=12, tm_min=40, tm_sec=11, tm_wday=4, tm_yday=256, tm_isdst=1) Validatation: >>> time.ctime(time.mktime(time.localtime(time.mktime(utc_tuple) - time.timezone))) 'Fri Sep 13 12:40:11 2013' Hope this helps, ilia.
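A quick sanity check of the calendar.timegm() answer above, using nothing but the standard library:

import calendar
import time

now = int(time.time())
utc_tuple = time.gmtime(now)

# calendar.timegm() is the inverse of time.gmtime() ...
assert calendar.timegm(utc_tuple) == now

# ... whereas time.mktime() interprets the tuple as *local* time, so the two
# only agree when the local timezone happens to be UTC.
print(calendar.timegm(utc_tuple), time.mktime(utc_tuple))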
{ "language": "en", "url": "https://stackoverflow.com/questions/130074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: CouchDB Document Model Changes? Rails uses the concept of migrations to deal with model changes using the ActiveRecord API. CouchDB uses JSON (nested maps and arrays) to represent its model objects. In working with CouchDB so far, I don't see good ways of recognizing when the document's structure has changed (other than being disciplined as a developer), or for migrating documents from an old to a new model. Are there existing features or do you have best practices for handling model changes in CouchDB? A: Time for RDBMS de-brainwashing. :) One of the biggest points of CouchDB's schema-less design is directly aimed at preventing the need for migrations. The JSON representation of objects makes it easy to just duck type your objects. For example, say you have a blog-type web app with posts and whatever fancy things people store in a blog. Your post documents have fields like author, title, created at, etc. Now you come along and think to yourself, "I should track what phase the moon is in when I publish my posts..." - you can just start adding moon_phase as an attribute to new posts. If you want to be complete you'd go back and add moon_phase to old posts, but that's not strictly necessary. In your views, you can access moon_phase as an attribute. And it'll be null or cause an exception or something. (Not a JS expert, I think null is the right answer.) Thing is, it doesn't really matter. If you feel like changing something just change it. Though make sure your views understand that change. Which in my experience doesn't really require much. Also, if you're really paranoid, you might store a version/type attribute, as in: { _id: "foo-post", _rev: "23490AD", type: "post", typevers: 0, moon_phase: "full" } Hope that helps. A: Check out ActiveCouch. CouchDB is schema-less on purpose, so there is not a 1-to-1 mapping of concepts from the ActiveRecord migrations to a CouchDB equivalent. However, ActiveCouch does include migrations for CouchDB's 'views'. A: If you're into having schemas and still want to use CouchDB you get an "impedance mismatch". Nevertheless, having "migrations" is not that hard. Add a schema_version element to each document. Then have your "document reading function" include updating. Something like this: def read(doc_id): doc = db.get(doc_id) if doc.schema_version == 1: # version 1 had names broken down too much doc.name = "%s %s" % (doc.first, doc.last) del doc.first del doc.last doc.schema_version = 2 db.put(doc) if doc.schema_version == 2: # version 2 stored the weight in kg instead of g doc.weight_g = doc.weight_kg * 1000 del doc.weight_kg doc.schema_version = 3 db.put(doc) return doc If you want to upgrade the whole DB at once just call read(doc_id) for every document.
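As a small illustration of the duck-typing answer above, a CouchDB view's map function can simply tolerate documents written before a new field existed; the field and type names here are just the ones used in that example:

// Map function for a CouchDB view: emits posts by creation date, and copes
// with older post documents saved before moon_phase was added.
function (doc) {
    if (doc.type === "post") {
        emit(doc.created_at, {
            title: doc.title,
            moon_phase: doc.moon_phase || null   // missing on old documents
        });
    }
}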
{ "language": "en", "url": "https://stackoverflow.com/questions/130092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Command line .cmd/.bat script, how to get directory of running script How can you get the directory of the script that was run and use it within the .cmd file? A: Raymond Chen has a few ideas: https://devblogs.microsoft.com/oldnewthing/20050128-00/?p=36573 Quoted here in full because MSDN archives tend to be somewhat unreliable: The easy way is to use the %CD% pseudo-variable. It expands to the current working directory. set OLDDIR=%CD% .. do stuff .. chdir /d %OLDDIR% &rem restore current directory (Of course, directory save/restore could more easily have been done with pushd/popd, but that's not the point here.) The %CD% trick is handy even from the command line. For example, I often find myself in a directory where there's a file that I want to operate on but... oh, I need to chdir to some other directory in order to perform that operation. set _=%CD%\curfile.txt cd ... some other directory ... somecommand args %_% args (I like to use %_% as my scratch environment variable.) Type SET /? to see the other pseudo-variables provided by the command processor. Also the comments in the article are well worth scanning for example this one (via the WayBack Machine, since comments are gone from older articles): http://blogs.msdn.com/oldnewthing/archive/2005/01/28/362565.aspx#362741 This covers the use of %~dp0: If you want to know where the batch file lives: %~dp0 %0 is the name of the batch file. ~dp gives you the drive and path of the specified argument. A: This is equivalent to the path of the script: %~dp0 This uses the batch parameter extension syntax. Parameter 0 is always the script itself. If your script is stored at C:\example\script.bat, then %~dp0 evaluates to C:\example\. ss64.com has more information about the parameter extension syntax. Here is the relevant excerpt: You can get the value of any parameter using a % followed by it's numerical position on the command line. [...] When a parameter is used to supply a filename then the following extended syntax can be applied: [...] %~d1 Expand %1 to a Drive letter only - C: [...] %~p1 Expand %1 to a Path only e.g. \utils\ this includes a trailing \ which may be interpreted as an escape character by some commands. [...] The modifiers above can be combined: %~dp1 Expand %1 to a drive letter and path only [...] You can get the pathname of the batch script itself with %0, parameter extensions can be applied to this so %~dp0 will return the Drive and Path to the batch script e.g. W:\scripts\ A: This answer will also work if the batch file is invoked without an explicit path! First the script determines if the batch file was called with a path. If that's the case that path is used. If not, the %path% is searched to find the batch file. @echo off setlocal enableextensions enabledelayedexpansion for /f %%i in ('cd') do set CURDIR=%%i set LAUNCHERPATH=%~dp0 if "%LAUNCHERPATH%" neq "%CURDIR%\" goto LAUNCHERPATHOK set LIST=%PATH% :ProcessList for /f "tokens=1* delims=;" %%a in ("!LIST!") do ( if "%%a" neq "" ( set x=%%a IF EXIST "%%a%0.bat" GOTO FOUND1 IF EXIST "%%a\%0.bat" GOTO FOUND0 IF EXIST "%%a%0" GOTO FOUND1 IF EXIST "%%a\%0" GOTO FOUND0 ) if "%%b" NEQ "" ( set List=%%b goto :ProcessList ) ) exit 1 :FOUND0 set x=%x%\ :FOUND1 set LAUNCHERPATH=%x% :LAUNCHERPATHOK echo %LAUNCHERPATH% Kudos also to dos batch iterate through a delimited string for parsing the path variable A: for /F "eol= delims=~" %%d in ('CD') do set curdir=%%d pushd %curdir% Source
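Putting the %~dp0 answer together with pushd/popd, here is a minimal self-contained example; nothing in it goes beyond standard cmd.exe behaviour:

@echo off
rem whereami.cmd - shows the difference between the current directory
rem and the directory the script itself lives in.
echo Full path of this script : %~f0
echo Directory of this script : %~dp0
echo Current directory        : %CD%

rem Temporarily switch to the script's own folder, do work there, then restore.
pushd "%~dp0"
echo Now working in           : %CD%
popd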
{ "language": "en", "url": "https://stackoverflow.com/questions/130112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Windows batch command(s) to read first line from text file How can I read the first line from a text file using a Windows batch file? Since the file is large I only want to deal with the first line. A: Thanks to thetalkingwalnut with answer Windows batch command(s) to read first line from text file I came up with the following solution: @echo off for /f "delims=" %%a in ('type sample.txt') do ( echo %%a exit /b ) A: Here's a general-purpose batch file to print the top n lines from a file like the GNU head utility, instead of just a single line. @echo off if [%1] == [] goto usage if [%2] == [] goto usage call :print_head %1 %2 goto :eof REM REM print_head REM Prints the first non-blank %1 lines in the file %2. REM :print_head setlocal EnableDelayedExpansion set /a counter=0 for /f ^"usebackq^ eol^=^ ^ delims^=^" %%a in (%2) do ( if "!counter!"=="%1" goto :eof echo %%a set /a counter+=1 ) goto :eof :usage echo Usage: head.bat COUNT FILENAME For example: Z:\>head 1 "test file.c" ; this is line 1 Z:\>head 3 "test file.c" ; this is line 1 this is line 2 line 3 right here It does not currently count blank lines. It is also subject to the batch-file line-length restriction of 8 KB. A: Slightly building upon the answers of other people. Now allowing you to specify the file you want to read from and the variable you want the result put into: @echo off for /f "delims=" %%x in (%2) do ( set %1=%%x exit /b ) This means you can use the above like this (assuming you called it getline.bat) c:\> dir > test-file c:\> getline variable test-file c:\> set variable variable= Volume in drive C has no label. A: powershell Get-Content file.txt -Head 1 This one is much quicker than the other powershell examples above, where the full file is read. A: One liner, useful for stdout redirect with ">": @for /f %%i in ('type yourfile.txt') do @echo %%i & exit A: Try this @echo off setlocal enableextensions enabledelayedexpansion set firstLine=1 for /f "delims=" %%i in (yourfilename.txt) do ( if !firstLine!==1 echo %%i set firstLine=0 ) endlocal A: uh? imo this is much simpler set /p texte=< file.txt echo %texte% A: Uh you guys... C:\>findstr /n . c:\boot.ini | findstr ^1: 1:[boot loader] C:\>findstr /n . c:\boot.ini | findstr ^3: 3:default=multi(0)disk(0)rdisk(0)partition(1)\WINNT C:\> A: Here is a workaround using powershell: powershell (Get-Content file.txt)[0] (You can easily read also a range of lines with powershell (Get-Content file.txt)[0..3]) If you need to set a variable inside a batch script as the first line of file.txt you may use: for /f "usebackq delims=" %%a in (`powershell ^(Get-Content file.txt^)[0]`) do (set "head=%%a") To test it create a text file test.txt with at least a couple of lines and in the same folder run the following batch file (give to the file the .bat extension): @echo off for /f "usebackq delims=" %%a in (`powershell ^(Get-Content test.txt^)[0]`) do (set "head=%%a") echo Hello echo %head% echo End pause In the command prompt window that will open, provided that the content of first line of test.txt is line 1, you will see Hello line 1 End Press any key to continue . . . A: To cicle a file (file1.txt, file1[1].txt, file1[2].txt, etc.): START/WAIT C:\LAERCIO\DELPHI\CICLADOR\dprCiclador.exe C:\LAERCIUM\Ciclavel.txt rem set/p ciclo=< C:\LAERCIUM\Ciclavel.txt: set/p ciclo=< C:\LAERCIUM\Ciclavel.txt rem echo %ciclo%: echo %ciclo% And it's running. 
A: You might give this a try: @echo off for /f %%a in (sample.txt) do ( echo %%a exit /b ) edit Or, say you have four columns of data and want from the 5th row down to the bottom, try this: @echo off for /f "skip=4 tokens=1-4" %%a in (junkl.txt) do ( echo %%a %%b %%c %%d ) A: The problem with the EXIT /B solutions, when more realistically inside a batch file as just one part of it is the following. There is no subsequent processing within the said batch file after the EXIT /B. Usually there is much more to batches than just the one, limited task. To counter that problem: @echo off & setlocal enableextensions enabledelayedexpansion set myfile_=C:\_D\TEST\My test file.txt set FirstLine= for /f "delims=" %%i in ('type "%myfile_%"') do ( if not defined FirstLine set FirstLine=%%i) echo FirstLine=%FirstLine% endlocal & goto :EOF (However, the so-called poison characters will still be a problem.) More on the subject of getting a particular line with batch commands: How do I get the n'th, the first and the last line of a text file?" http://www.netikka.net/tsneti/info/tscmd023.htm [Added 28-Aug-2012] One can also have: @echo off & setlocal enableextensions set myfile_=C:\_D\TEST\My test file.txt for /f "tokens=* delims=" %%a in ( 'type "%myfile_%"') do ( set FirstLine=%%a& goto _ExitForLoop) :_ExitForLoop echo FirstLine=%FirstLine% endlocal & goto :EOF A: Another way setlocal enabledelayedexpansion @echo off for /f "delims=" %%i in (filename.txt) do ( if 1==1 ( set first_line=%%i echo !first_line! goto :eof )) A: Note, the batch file approaches will be limited to the line limit for the DOS command processor - see What is the command line length limit?. So if trying to process a file that has any lines more that 8192 characters the script will just skip them as the value can't be held. A: In Windows PowerShell below cmd can be used to get the first line and replace it with a static value powershell -Command "(gc txt1.txt) -replace (gc txt1.txt)[0], 'This is the first line' | Out-File -encoding ASCII txt1.txt" Reference How can you find and replace text in a file using the Windows command-line environment? A: Print 1st line only (no need to read entire file): set /p a=< file.txt & echo !a! To print one line at a time; user to press a key for next line: (After printing required lines, press Ctrl+C to stop.) for /f "delims=" %a in (downing.txt) do echo %a & pause>nul To print 1st n lines (without user intervention): type nul > tmp & fc tmp "%file%" /lb %n% /t | find /v "?" | more +2 Tested on Win 10 CMD.
{ "language": "en", "url": "https://stackoverflow.com/questions/130116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "98" }
Q: If you shouldn't throw exceptions in a destructor, how do you handle errors in it? Most people say never throw an exception out of a destructor - doing so results in undefined behavior. Stroustrup makes the point that "the vector destructor explicitly invokes the destructor for every element. This implies that if an element destructor throws, the vector destruction fails... There is really no good way to protect against exceptions thrown from destructors, so the library makes no guarantees if an element destructor throws" (from Appendix E3.2). This article seems to say otherwise - that throwing destructors are more or less okay. So my question is this - if throwing from a destructor results in undefined behavior, how do you handle errors that occur during a destructor? If an error occurs during a cleanup operation, do you just ignore it? If it is an error that can potentially be handled up the stack but not right in the destructor, doesn't it make sense to throw an exception out of the destructor? Obviously these kinds of errors are rare, but possible. A: Your destructor might be executing inside a chain of other destructors. Throwing an exception that is not caught by your immediate caller can leave multiple objects in an inconsistent state, thus causing even more problems then ignoring the error in the cleanup operation. A: Throwing out of a destructor can result in a crash, because this destructor might be called as part of "Stack unwinding". Stack unwinding is a procedure which takes place when an exception is thrown. In this procedure, all the objects that were pushed into the stack since the "try" and until the exception was thrown, will be terminated -> their destructors will be called. And during this procedure, another exception throw is not allowed, because it's not possible to handle two exceptions at a time, thus, this will provoke a call to abort(), the program will crash and the control will return to the OS. A: We have to differentiate here instead of blindly following general advice for specific cases. Note that the following ignores the issue of containers of objects and what to do in the face of multiple d'tors of objects inside containers. (And it can be ignored partially, as some objects are just no good fit to put into a container.) The whole problem becomes easier to think about when we split classes in two types. A class dtor can have two different responsibilities: * *(R) release semantics (aka free that memory) *(C) commit semantics (aka flush file to disk) If we view the question this way, then I think that it can be argued that (R) semantics should never cause an exception from a dtor as there is a) nothing we can do about it and b) many free-resource operations do not even provide for error checking, e.g. void free(void* p);. Objects with (C) semantics, like a file object that needs to successfully flush it's data or a ("scope guarded") database connection that does a commit in the dtor are of a different kind: We can do something about the error (on the application level) and we really should not continue as if nothing happened. If we follow the RAII route and allow for objects that have (C) semantics in their d'tors I think we then also have to allow for the odd case where such d'tors can throw. It follows that you should not put such objects into containers and it also follows that the program can still terminate() if a commit-dtor throws while another exception is active. 
With regard to error handling (Commit / Rollback semantics) and exceptions, there is a good talk by one Andrei Alexandrescu: Error Handling in C++ / Declarative Control Flow (held at NDC 2014) In the details, he explains how the Folly library implements an UncaughtExceptionCounter for their ScopeGuard tooling. (I should note that others also had similar ideas.) While the talk doesn't focus on throwing from a d'tor, it shows a tool that can be used today to get rid of the problems with when to throw from a d'tor. In the future, there may be a std feature for this, see N3614, and a discussion about it. Upd '17: The C++17 std feature for this is std::uncaught_exceptions afaikt. I'll quickly quote the cppref article: Notes An example where int-returning uncaught_exceptions is used is ... ... first creates a guard object and records the number of uncaught exceptions in its constructor. The output is performed by the guard object's destructor unless foo() throws (in which case the number of uncaught exceptions in the destructor is greater than what the constructor observed) A: Everyone else has explained why throwing destructors are terrible... what can you do about it? If you're doing an operation that may fail, create a separate public method that performs cleanup and can throw arbitrary exceptions. In most cases, users will ignore that. If users want to monitor the success/failure of the cleanup, they can simply call the explicit cleanup routine. For example: class TempFile { public: TempFile(); // throws if the file couldn't be created ~TempFile() throw(); // does nothing if close() was already called; never throws void close(); // throws if the file couldn't be deleted (e.g. file is open by another process) // the rest of the class omitted... }; A: As an addition to the main answers, which are good, comprehensive and accurate, I would like to comment about the article you reference - the one that says "throwing exceptions in destructors is not so bad". The article takes the line "what are the alternatives to throwing exceptions", and lists some problems with each of the alternatives. Having done so it concludes that because we can't find a problem-free alternative we should keep throwing exceptions. The trouble is is that none of the problems it lists with the alternatives are anywhere near as bad as the exception behaviour, which, let's remember, is "undefined behaviour of your program". Some of the author's objections include "aesthetically ugly" and "encourage bad style". Now which would you rather have? A program with bad style, or one which exhibited undefined behaviour? A: Throwing an exception out of a destructor is dangerous. If another exception is already propagating the application will terminate. #include <iostream> class Bad { public: // Added the noexcept(false) so the code keeps its original meaning. // Post C++11 destructors are by default `noexcept(true)` and // this will (by default) call terminate if an exception is // escapes the destructor. // // But this example is designed to show that terminate is called // if two exceptions are propagating at the same time. ~Bad() noexcept(false) { throw 1; } }; class Bad2 { public: ~Bad2() { throw 1; } }; int main(int argc, char* argv[]) { try { Bad bad; } catch(...) { std::cout << "Print This\n"; } try { if (argc > 3) { Bad bad; // This destructor will throw an exception that escapes (see above) throw 2; // But having two exceptions propagating at the // same time causes terminate to be called. 
} else { Bad2 bad; // The exception in this destructor will // cause terminate to be called. } } catch(...) { std::cout << "Never print this\n"; } } This basically boils down to: Anything dangerous (i.e. that could throw an exception) should be done via public methods (not necessarily directly). The user of your class can then potentially handle these situations by using the public methods and catching any potential exceptions. The destructor will then finish off the object by calling these methods (if the user did not do so explicitly), but any exceptions throw are caught and dropped (after attempting to fix the problem). So in effect you pass the responsibility onto the user. If the user is in a position to correct exceptions they will manually call the appropriate functions and processes any errors. If the user of the object is not worried (as the object will be destroyed) then the destructor is left to take care of business. An example: std::fstream The close() method can potentially throw an exception. The destructor calls close() if the file has been opened but makes sure that any exceptions do not propagate out of the destructor. So if the user of a file object wants to do special handling for problems associated to closing the file they will manually call close() and handle any exceptions. If on the other hand they do not care then the destructor will be left to handle the situation. Scott Myers has an excellent article about the subject in his book "Effective C++" Edit: Apparently also in "More Effective C++" Item 11: Prevent exceptions from leaving destructors A: The real question to ask yourself about throwing from a destructor is "What can the caller do with this?" Is there actually anything useful you can do with the exception, that would offset the dangers created by throwing from a destructor? If I destroy a Foo object, and the Foo destructor tosses out an exception, what I can reasonably do with it? I can log it, or I can ignore it. That's all. I can't "fix" it, because the Foo object is already gone. Best case, I log the exception and continue as if nothing happened (or terminate the program). Is that really worth potentially causing undefined behavior by throwing from a destructor? A: Q: So my question is this - if throwing from a destructor results in undefined behavior, how do you handle errors that occur during a destructor? A: There are several options: * *Let the exceptions flow out of your destructor, regardless of what's going on elsewhere. And in doing so be aware (or even fearful) that std::terminate may follow. *Never let exception flow out of your destructor. May be write to a log, some big red bad text if you can. *my fave : If std::uncaught_exception returns false, let you exceptions flow out. If it returns true, then fall back to the logging approach. But is it good to throw in d'tors? I agree with most of the above that throwing is best avoided in destructor, where it can be. But sometimes you're best off accepting it can happen, and handle it well. I'd choose 3 above. There are a few odd cases where its actually a great idea to throw from a destructor. Like the "must check" error code. This is a value type which is returned from a function. If the caller reads/checks the contained error code, the returned value destructs silently. But, if the returned error code has not been read by the time the return values goes out of scope, it will throw some exception, from its destructor. 
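As a rough sketch of that "must check" idea (purely illustrative - this is not a known library type, and a production version would combine it with the std::uncaught_exception / std::uncaught_exceptions check from option 3 so it stays silent during stack unwinding):
#include <iostream>
#include <stdexcept>

class MustCheck {
public:
    explicit MustCheck(int code) : code_(code) {}

    // Throws only if nobody ever looked at the code.
    ~MustCheck() noexcept(false) {
        if (!checked_) throw std::logic_error("error code was never examined");
    }

    int get() const {        // reading the code "disarms" the destructor
        checked_ = true;
        return code_;
    }

private:
    int code_;
    mutable bool checked_ = false;
};

MustCheck do_something() { return MustCheck(0); }

int main() {
    if (do_something().get() != 0) {   // checked: the temporary destructs silently
        std::cerr << "operation failed\n";
    }
    // do_something();                 // unchecked: would throw from the temporary's destructor
}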
A: So my question is this - if throwing from a destructor results in undefined behavior, how do you handle errors that occur during a destructor? The main problem is this: you can't fail to fail. What does it mean to fail to fail, after all? If committing a transaction to a database fails, and it fails to fail (fails to rollback), what happens to the integrity of our data? Since destructors are invoked for both normal and exceptional (fail) paths, they themselves cannot fail or else we're "failing to fail". This is a conceptually difficult problem but often the solution is to just find a way to make sure that failing cannot fail. For example, a database might write changes prior to committing to an external data structure or file. If the transaction fails, then the file/data structure can be tossed away. All it has to then ensure is that committing the changes from that external structure/file an atomic transaction that can't fail. The pragmatic solution is perhaps just make sure that the chances of failing on failure are astronomically improbable, since making things impossible to fail to fail can be almost impossible in some cases. The most proper solution to me is to write your non-cleanup logic in a way such that the cleanup logic can't fail. For example, if you're tempted to create a new data structure in order to clean up an existing data structure, then perhaps you might seek to create that auxiliary structure in advance so that we no longer have to create it inside a destructor. This is all much easier said than done, admittedly, but it's the only really proper way I see to go about it. Sometimes I think there should be an ability to write separate destructor logic for normal execution paths away from exceptional ones, since sometimes destructors feel a little bit like they have double the responsibilities by trying to handle both (an example is scope guards which require explicit dismissal; they wouldn't require this if they could differentiate exceptional destruction paths from non-exceptional ones). Still the ultimate problem is that we can't fail to fail, and it's a hard conceptual design problem to solve perfectly in all cases. It does get easier if you don't get too wrapped up in complex control structures with tons of teeny objects interacting with each other, and instead model your designs in a slightly bulkier fashion (example: particle system with a destructor to destroy the entire particle system, not a separate non-trivial destructor per particle). When you model your designs at this kind of coarser level, you have less non-trivial destructors to deal with, and can also often afford whatever memory/processing overhead is required to make sure your destructors cannot fail. And that's one of the easiest solutions naturally is to use destructors less often. In the particle example above, perhaps upon destroying/removing a particle, some things should be done that could fail for whatever reason. In that case, instead of invoking such logic through the particle's dtor which could be executed in an exceptional path, you could instead have it all done by the particle system when it removes a particle. Removing a particle might always be done during a non-exceptional path. If the system is destroyed, maybe it can just purge all particles and not bother with that individual particle removal logic which can fail, while the logic that can fail is only executed during the particle system's normal execution when it's removing one or more particles. 
There are often solutions like that which crop up if you avoid dealing with lots of teeny objects with non-trivial destructors. Where you can get tangled up in a mess where it seems almost impossible to be exception-safety is when you do get tangled up in lots of teeny objects that all have non-trivial dtors. It would help a lot if nothrow/noexcept actually translated into a compiler error if anything which specifies it (including virtual functions which should inherit the noexcept specification of its base class) attempted to invoke anything that could throw. This way we'd be able to catch all this stuff at compile-time if we actually write a destructor inadvertently which could throw. A: Its dangerous, but it also doesn't make sense from a readability/code understandability standpoint. What you have to ask is in this situation int foo() { Object o; // As foo exits, o's destructor is called } What should catch the exception? Should the caller of foo? Or should foo handle it? Why should the caller of foo care about some object internal to foo? There might be a way the language defines this to make sense, but its going to be unreadable and difficult to understand. More importantly, where does the memory for Object go? Where does the memory the object owned go? Is it still allocated (ostensibly because the destructor failed)? Consider also the object was in stack space, so its obviously gone regardless. Then consider this case class Object { Object2 obj2; Object3* obj3; virtual ~Object() { // What should happen when this fails? How would I actually destroy this? delete obj3; // obj 2 fails to destruct when it goes out of scope, now what!?!? // should the exception propogate? } }; When the delete of obj3 fails, how do I actually delete in a way that is guaranteed to not fail? Its my memory dammit! Now consider in the first code snippet Object goes away automatically because its on the stack while Object3 is on the heap. Since the pointer to Object3 is gone, you're kind of SOL. You have a memory leak. Now one safe way to do things is the following class Socket { virtual ~Socket() { try { Close(); } catch (...) { // Why did close fail? make sure it *really* does close here } } }; Also see this FAQ A: From the ISO draft for C++ (ISO/IEC JTC 1/SC 22 N 4411) So destructors should generally catch exceptions and not let them propagate out of the destructor. 3 The process of calling destructors for automatic objects constructed on the path from a try block to a throw- expression is called “stack unwinding.” [ Note: If a destructor called during stack unwinding exits with an exception, std::terminate is called (15.5.1). So destructors should generally catch exceptions and not let them propagate out of the destructor. — end note ] A: I am in the group that considers that the "scoped guard" pattern throwing in the destructor is useful in many situations - particularly for unit tests. However, be aware that in C++11, throwing in a destructor results in a call to std::terminate since destructors are implicitly annotated with noexcept. Andrzej Krzemieński has a great post on the topic of destructors that throw: * *https://akrzemi1.wordpress.com/2011/09/21/destructors-that-throw/ He points out that C++11 has a mechanism to override the default noexcept for destructors: In C++11, a destructor is implicitly specified as noexcept. Even if you add no specification and define your destructor like this: class MyType { public: ~MyType() { throw Exception(); } // ... 
}; The compiler will still invisibly add specification noexcept to your destructor. And this means that the moment your destructor throws an exception, std::terminate will be called, even if there was no double-exception situation. If you are really determined to allow your destructors to throw, you will have to specify this explicitly; you have three options: * *Explicitly specify your destructor as noexcept(false), *Inherit your class from another one that already specifies its destructor as noexcept(false). *Put a non-static data member in your class that already specifies its destructor as noexcept(false). Finally, if you do decide to throw in the destructor, you should always be aware of the risk of a double-exception (throwing while the stack is being unwind because of an exception). This would cause a call to std::terminate and it is rarely what you want. To avoid this behaviour, you can simply check if there is already an exception before throwing a new one using std::uncaught_exception(). A: Throwing an exception out of a destructor never causes undefined behaviour. The problem of throwing exceptions out a destructor is that destructors of successfully created objects which scopes are leaving while handling an uncaught exception (it is after an exception object is created and until completion of a handler of the exception activation), are called by exception handling mechanism; and, If such additional exception from the destructor called while processing the uncaught exception interrupts handling the uncaught exception, it will cause calling std::terminate (the other case when std::exception is called is that an exception is not handled by any handler but this is as for any other function, regardless of whether or not it was a destructor). If handling an uncaught exception in progress, your code never knows whether the additional exception will be caught or will archive an uncaught exception handling mechanism, so never know definitely whether it is safe to throw or not. Though, it is possible to know that handling an uncaught exception is in progress ( https://en.cppreference.com/w/cpp/error/uncaught_exception), so you are able to overkill by checking the condition and throw only if it is not the case (it will not throw in some cases when it would be safe). But in practice such separating into two possible behaviours is not useful - it just does not help you to make a well-designed program. If you throw out of destructors ignoring whether or not an uncaught exception handling is in progress, in order to avoid possible calling std::terminate, you must guarantee that all exceptions thrown during lifetime of an object that may throw an exception from their destructor are caught before beginning of destruction of the object. It is quite limited usage; you hardly can use all classes which would be reasonably allowed to throw out of their destructor in this way; and a combination of allowing such exceptions only for some classes with such restricted usage of these classes impede making a well-designed program, too. A: Set an alarm event. Typically alarm events are better form of notifying failure while cleaning up objects A: Unlike constructors, where throwing exceptions can be a useful way to indicate that object creation succeeded, exceptions should not be thrown in destructors. The problem occurs when an exception is thrown from a destructor during the stack unwinding process. 
If that happens, the compiler is put in a situation where it doesn’t know whether to continue the stack unwinding process or handle the new exception. The end result is that your program will be terminated immediately. Consequently, the best course of action is just to abstain from using exceptions in destructors altogether. Write a message to a log file instead. A: I currently follow the policy (that so many are saying) that classes shouldn't actively throw exceptions from their destructors but should instead provide a public "close" method to perform the operation that could fail... ...but I do believe destructors for container-type classes, like a vector, should not mask exceptions thrown from classes they contain. In this case, I actually use a "free/close" method that calls itself recursively. Yes, I said recursively. There's a method to this madness. Exception propagation relies on there being a stack: If a single exception occurs, then both the remaining destructors will still run and the pending exception will propagate once the routine returns, which is great. If multiple exceptions occur, then (depending on the compiler) either that first exception will propagate or the program will terminate, which is okay. If so many exceptions occur that the recursion overflows the stack then something is seriously wrong, and someone's going to find out about it, which is also okay. Personally, I err on the side of errors blowing up rather than being hidden, secret, and insidious. The point is that the container remains neutral, and it's up to the contained classes to decide whether they behave or misbehave with regard to throwing exceptions from their destructors. A: Martin Ba (above) is on the right track- you architect differently for RELEASE and COMMIT logic. For Release: You should eat any errors. You're freeing memory, closing connections, etc. Nobody else in the system should ever SEE those things again, and you're handing back resources to the OS. If it looks like you need real error handling here, its likely a consequence of design flaws in your object model. For Commit: This is where you want the same kind of RAII wrapper objects that things like std::lock_guard are providing for mutexes. With those you don't put the commit logic in the dtor AT ALL. You have a dedicated API for it, then wrapper objects that will RAII commit it in THEIR dtors and handle the errors there. Remember, you can CATCH exceptions in a destructor just fine; its issuing them that's deadly. This also lets you implement policy and different error handling just by building a different wrapper (e.g. std::unique_lock vs. std::lock_guard), and ensures you won't forget to call the commit logic- which is the only half-way decent justification for putting it in a dtor in the 1st place.
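A minimal sketch of that commit-wrapper idea follows. All names are invented for the example; different wrappers could apply different error policies, much as std::unique_lock and std::lock_guard embody different policies over the same underlying API:
#include <exception>
#include <functional>
#include <iostream>
#include <utility>

class CommitGuard {
public:
    explicit CommitGuard(std::function<void()> commit)
        : commit_(std::move(commit)) {}

    // Implicitly noexcept, and that is fine here: this destructor only CATCHES.
    ~CommitGuard() {
        if (done_) return;
        try {
            commit_();
        } catch (const std::exception& e) {
            std::cerr << "commit failed in destructor: " << e.what() << '\n';
        } catch (...) {
            std::cerr << "commit failed in destructor\n";
        }
    }

    void commit() {
        done_ = true;          // mark first, so the destructor won't retry a failed commit
        commit_();             // callers who care about errors use this path;
    }                          // exceptions propagate to them normally

private:
    std::function<void()> commit_;
    bool done_ = false;
};

int main() {
    CommitGuard g([] { /* flush the file, commit the transaction, ... */ });
    g.commit();                // explicit path: errors reach the caller
}                              // the implicit (destructor) path would only log, never throw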
{ "language": "en", "url": "https://stackoverflow.com/questions/130117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "302" }
Q: StarTeam 2005 COM API Has anyone worked with the StarTeam COM API (specifically, integrating with C#)? I need to write a helper function that returns a directory structure out of StarTeam, but all I've been able to retrieve using this API has been a list of views. Has anyone else tried this? A: In the interests of completeness, if you don't want to write the recursive code to navigate the hierarchy of folders yourself, there is a helper class called FolderListManager that does the hard work for you:
void BtnFindClick(object sender, EventArgs e)
{
    // Open the view from a StarTeam URL of the form user:password@server:port/Project
    Borland.StarTeam.View v = StarTeamFinder.OpenView("username:pwd@server:49201/Project");
    FolderListManager lm = new FolderListManager(v);
    lm.IncludeFolders(v.RootFolder, -1); // -1 means recursively add child folders
    StringBuilder sb = new StringBuilder();
    foreach (Folder f in lm.Folders)
    {
        sb.AppendLine(f.Path);
    }
    txtResults.Text = sb.ToString();
}
A: The StarTeam object model is hierarchical: projects contain views, views contain folders, and folders contain items (child folders, files, CRs, etc.). So once you have your view list you can get the folders that belong to each view. You then have a few properties that determine how the folders map to the local file system; both the view object and the folder objects have a read-only Path property. There are four other properties of interest: on the view object, read up on DefaultPath and AlternatePath, and on the folder object, DefaultPathFragment and AlternatePathFragment. A: You don't have to use COM to access the StarTeam API. There's a .NET version of the StarTeam SDK available.
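Building on the same calls shown above, a reusable helper that returns the folder paths instead of writing them to a TextBox might look roughly like this. It is a sketch only: it assumes StarTeamFinder, FolderListManager and Folder resolve via the same SDK namespaces used in the snippet above, and the view URL is a placeholder for your own server and project:
using System.Collections.Generic;

static class StarTeamHelper
{
    // Returns every folder path in the given view,
    // e.g. for "username:pwd@server:49201/Project".
    public static List<string> GetFolderPaths(string viewUrl)
    {
        Borland.StarTeam.View v = StarTeamFinder.OpenView(viewUrl);
        FolderListManager lm = new FolderListManager(v);
        lm.IncludeFolders(v.RootFolder, -1);   // -1 = include child folders recursively

        List<string> paths = new List<string>();
        foreach (Folder f in lm.Folders)
        {
            paths.Add(f.Path);
        }
        return paths;
    }
}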
{ "language": "en", "url": "https://stackoverflow.com/questions/130120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Windows Forms Threading and Events - most efficient way to hand off events? My form receives asynchronous callbacks from another object on random worker threads. I have been passing the data to the main thread (where it can be used to update onscreen controls) using delegates as shown below. Performance is dreadful -- once I reach 500 updates per second, the program completely locks up. My GUI processing itself is not the problem, as I can simulate this level of updating within the form and have no problems. Is there a more efficient mechanism I should be using to hand off the data from thread to thread? delegate void DStatus( MyStatus obj ); DStatus _status; // set to MainThreadOnStatus during construction // this function only called on form's owner thread void MainThreadOnStatus( MyStatus obj ) { // screen updates here as needed } // this function called by arbitrary worker threads in external facility void OnStatus( MyStatus obj ) { this.BeginInvoke( _status, obj ); } A: I’m not a big fan of timers, if you want a more event driven approach, try something like this: public class Foo { private AsyncOperation _asyncOperation = null; private SendOrPostCallback _notifyNewItem = null; //Make sure you call this on your UI thread. //Alternatively you can call something like the AttachUI() below later on and catch-up with //your workers later. public Foo() { this._notifyNewItem = new SendOrPostCallback(this.NewDataInTempList); this._asyncOperation = AsyncOperationManager.CreateOperation(this); } public void AttachUI() { if (this._asyncOperation != null) { this._asyncOperation.OperationCompleted(); this._asyncOperation = null; } this._asyncOperation = AsyncOperationManager.CreateOperation(this); //This is for catching up with the workers if they’ve been busy already if (this._asyncOperation != null) { this._asyncOperation.Post(this._notifyNewItem, null); } } private int _tempCapacity = 500; private object _tempListLock = new object(); private List<MyStatus> _tempList = null; //This gets called on the worker threads.. //Keeps adding to the same list until UI grabs it, then create a new one. public void Add(MyStatus status) { bool notify = false; lock (_tempListLock) { if (this._tempList == null) { this._tempList = new List<MyStatus>(this._tempCapacity); notify = true; } this._tempList.Add(status); } if (notify) { if (this._asyncOperation != null) { this._asyncOperation.Post(this._notifyNewItem, null); } } } //This gets called on your UI thread. private void NewDataInTempList(object o) { List<MyStatus> statusList = null; lock (this._tempListLock) { //Grab the list, and release the lock as soon as possible. statusList = this._tempList; this._tempList = null; } if (statusList != null) { //Deal with it here at your leasure } } } I’ve used this in a custom Log4Net logger collecting log entries and adding them to a circular array that’s bound to a grid. The performance ended up being pretty good for what it does. A: You probably don't need to update UI on every event, but rather "not as often as X times per second". You may utilize StopWatch or other timing system to collect events during a period of time, and then update UI when appropriate. If you need to capture all events, collect them in the Queue and fire event every so often, and that event handler will process the Queue and update UI once for all queued events. A: I've been doing what Ilya has been suggesting. For UIs that don't have to respond "real time" I have a stopwatch that goes twice a second or so. 
For faster updates I use a queue or another data structure that stores the event data, and then use "lock (queue) { }" to avoid contention. If you don't want to slow down the worker threads, you have to make sure the UI thread doesn't hold the lock (and thereby block the workers) for too long. A: It is difficult to tell the exact problem, but here are some possibilities... Is the MyStatus object you are passing to OnStatus derived from MarshalByRefObject (and every object it contains)? If not, it will be serialized on every call that is marshaled, and that can cause a huge performance hit. Also, you should really check this.InvokeRequired before invoking the delegate on the control, but that is just a best practice.
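To make the queue-and-drain idea concrete, here is a minimal sketch. It reuses the MyStatus type and MainThreadOnStatus routine from the question; the field and timer names are invented for the example, and the members are assumed to live inside the form class:
using System;
using System.Collections.Generic;
using System.Windows.Forms;

// Inside the form class:
private readonly Queue<MyStatus> _pending = new Queue<MyStatus>();
private Timer _uiTimer;   // System.Windows.Forms.Timer - its Tick fires on the UI thread

// Called by arbitrary worker threads: just enqueue, never touch the UI here.
void OnStatus(MyStatus obj)
{
    lock (_pending)
    {
        _pending.Enqueue(obj);
    }
}

// Call once, e.g. from the form's constructor.
void StartUiTimer()
{
    _uiTimer = new Timer();
    _uiTimer.Interval = 100;                  // drain ~10 times a second instead of 500+ invokes
    _uiTimer.Tick += delegate { DrainPending(); };
    _uiTimer.Start();
}

// Runs on the UI thread via the timer: grab the batch quickly, release the lock, then update.
void DrainPending()
{
    MyStatus[] batch;
    lock (_pending)
    {
        if (_pending.Count == 0) return;
        batch = _pending.ToArray();
        _pending.Clear();
    }
    foreach (MyStatus status in batch)
    {
        MainThreadOnStatus(status);           // the existing UI update routine
    }
}
The design point is that worker threads only ever take a short lock, and the expensive screen updates happen in batches on the UI thread at a rate the UI can actually keep up with.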
{ "language": "en", "url": "https://stackoverflow.com/questions/130132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: IE6 issues with transparent PNGs I've gotten used to the idea that if I want/need to use alpha-trans PNGs in a cross-browser manner, that I use a background image on a div and then, in IE6-only CSS, mark the background as "none" and include the proper "filter" argument. Is there another way? A better way? Is there a way to do this with the img tag and not with background images? A: The bottom line is, if you want alpha transparency in a PNG, and you want it to work in IE6, then you need to have the AlphaImageLoader filter applied. Now, there are numerous ways to do it: Browser specific hacks, Conditional Comments, Javascript/JQuery/JLibraryOfChoice element iteration, Server-Side CSS-serving via UserAgent-sniffing... But all of 'em come down to having the filter applied and the background removed. A: That's most likely the "best" way. But keep in mind that it's not just alpha-trans that IE6 doesn't implement properly when it comes to PNG files; the color space is corrupt due to IE not implementing the gamma properly, and thus PNG files often show "darker" than they should. One alternative "solution" that we implemented on a recent project was to mark every png image with a "toGif" class, in the CSS of which a custom behavior .htc is called which changes the .png extension to .gif if the browser is detected to be one we've marked as a problem. We just include a GIF version of every PNG alongside it in the same path, and if the browser is found to be one that doesn't handle PNGs properly, it swaps it out with a GIF version of the image. We therefore sacrifice the alpha blending in favor of guaranteed full-on transparency and color accuracy, and only do so when we know it's probably not going to look right as-is. It may not be an ideal solution, but it's the nature of cross-browser I suppose. Edit: Actually now that I look at the project in question, we used an .htc behavior for an img class called "alpha" as well which tosses the correct filter on the image automatically. So you're detecting the browser using javascript instead of an IE6-only pure CSS hack, so it might be a little bit more elegant... but it's basically the same thing. For an introduction to how to write DHTML behaviors, try this link. A: Here's a specific solution I like, using Javascript (jQuery): http://jquery.andreaseberhard.de/pngFix/ It's easy to add to an existing site, handles all manner of images (form buttons, backgrounds, regular IMG tags, etc), and leaves your CSS nice and clean. A: The image loader is the only available fix for IE6. Note that it's PNG support is very rudimentary (along with IE7, too), and cannot correctly handle looped transparent backgrounds. I learnt this the hard way when trying to design a website with a transparent container. Worked perfectly in Firefox, of course. The fix should be OK for small areas of background and any transparent foreground graphics, but again I'd advise against designing a website that uses large amounts of transparency with Internet Explorer. In the end my solution was to display a flat colour for IE, but retained the transparency for the other browsers. Didn't hurt the design too much in the end, fortunately. 
A: Another way around this is to use 2 separate images, e.g a GIF and a transparent PNG, and target your CSS accordingly: /* for IE 6 */ #banner { background:url(../images/banner.gif); } /* for other browsers */ html > #banner { background:url(../images/banner.png); } IE 6 does not understand CSS child selectors so will ignore the rule, whereas browsers that do understand it will give you a nice transparent PNG. The only downside is that you have to have two separate images and the design might not look exactly the same cross-browser but as long as it does not look broken you should be ok. A: Here are 2 options which do not use the AlphaImageLoader filter. * *DD_belatedPNG by Drew Diller: Uses a VML-based solution rather than the AlphaImageLoader filter. *PNGPong: A Flash based solution For me, if sending a matted .gif to IE6 only isn't feasible, I use Fireworks to add an IE6-friendly palette to the .PNG. A: The usual solution for img elements is to change the src to point to a 1x1 pixel transparent GIF and then use the same filter hack.
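For completeness, the AlphaImageLoader filter the answers above keep referring to looks like this when combined with an IE6-only conditional comment. The #banner selector and the image path are placeholders for your own markup:
<!--[if lte IE 6]>
<style type="text/css">
  #banner {
    /* remove the PNG background that other browsers use */
    background: none;
    /* note: IE resolves this src relative to the page, not the stylesheet */
    filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='images/banner.png', sizingMethod='scale');
  }
</style>
<![endif]-->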
{ "language": "en", "url": "https://stackoverflow.com/questions/130161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: ASP.NET not seeing Radio Button value change I have a form with some radio buttons that are disabled by default. When a value gets entered into a text box, the radio buttons are enabled via JavaScript. The user then selects one of the radio buttons and clicks a submit button, which posts back to the server. When I get back to the server, the radio button that the user clicked is not showing as checked. I'll use 'rbSolid' as the radio button I'm focusing on. I handle the 'onclick' event of the radio buttons, but I don't have the function doing anything yet other than firing: Me.rbSolid.Attributes.Add("onclick", "styleLookupChanged(this);") On the client, this enables the radio button when the textbox value is changed: document.getElementById("ctl00_MainLayoutContent_WebPanel4_rbSolid").disabled = false; I then click the radio button and post back via a button, but back on the server this is always false: If Me.rbSolid.Checked Then... If I have the radio button enabled by default, it shows as checked correctly. Thanks for any help! A: This has to do with how ASP.NET handles postback data. If a control is disabled (control.Enabled = False) when the page is rendered, then its value will not be posted back to the server. How I have solved it in the past is to set the disabled flag through the Attributes collection instead of using the Enabled property. So instead of control.Enabled = False, you use control.Attributes.Add("disabled", "disabled"). A: This isn't working for me. I added: Me.rbSolid.Style.Add("background-color", "red") Me.rbSolid.Style.Add("disabled", "true") and the background style works but the disabled did not. It's still editable when the form renders. Here's the source after load:
<span style="font-family:Verdana;font-size:8pt;background-color:red;Disabled:true;">
  <input id="ctl00_MainLayoutContent_WebPanel4_rbSolid" type="radio" name="ctl00$MainLayoutContent$WebPanel4$DetailType" value="rbSolid" onclick="styleLookupChanged(this);"/>
  <label for="ctl00_MainLayoutContent_WebPanel4_rbSolid">Solid</label>
</span>
A: Apparently "disabled" is an HTML attribute, not a CSS attribute, which is why adding it via Style has no effect. If you disable a radio button on the server side in ASP.NET and then check the rendered HTML, the radio button is embedded in a span tag with its disabled attribute set to true. You might try targeting the span of the button instead (the parent element; it has no id) and setting disabled=false in your JavaScript to see if that works.
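A short sketch of the first answer's suggestion, using the rbSolid control from the question (the Page_Load placement is just one option, and whether this resolves the postback issue should be verified in your own page):
' Server side: keep Enabled = True so ASP.NET still processes the posted value,
' but ask the browser to render the button disabled initially via an attribute.
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    If Not IsPostBack Then
        rbSolid.Attributes.Add("disabled", "disabled")
        rbSolid.Attributes.Add("onclick", "styleLookupChanged(this);")
    End If
End Sub
On the client, the existing JavaScript then re-enables the button exactly as in the question (element.disabled = false), and because the control was never disabled from ASP.NET's point of view, the checked value should come through on postback.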
{ "language": "en", "url": "https://stackoverflow.com/questions/130165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: "Clicking" Command Button from other workbook I am trying to write a macro that would "click" a command button that is in another workbook. Is that possible? Without changing any of the code within that other workbook? A: For an ActiveX button in another workbook: Workbooks("OtherBook").Worksheets("Sheet1").CommandButton1.Value = True For an MSForms button in another workbook: Application.Run Workbooks("OtherBook").Worksheets("Sheet1").Shapes("Button 1").OnAction A: You can use Application.Run for that: Run "OtherWorkbook.xls!MyOtherMacro" A: Instead of trying to programmatically click the button, it is possible to run the macro linked to the button directly from your code. First you need to find the name of the macro that is run when the button is clicked. To do this, open the workbook that contains the command button, right-click the command button and select 'Assign Macro'. The 'Assign Macro' dialog will be displayed. Make a note of the full name in the 'Macro name' box at the top of the dialog, then click OK. The code in the workbook that needs to call the macro should be as follows:
Sub Run_Macro()
    Workbooks.Open Filename:="C:\Book1.xls" 'Open the workbook containing the command button
                                            'Change the path and filename as required
    Application.Run "Book1.xls!Macro1"      'Run the macro
                                            'Change the filename and macro name as required
    'If the macro is attached to a worksheet rather than a module, the code would be
    'Application.Run "Book1.xls!Sheet1.Macro1"
End Sub
A: Are you sure you mean workbook and not sheet? Anyway, if you "want to loop through all workbooks in a folder and perform an operation on each of them", note that accessing another sheet is done like this: Worksheets("MySheet").YourMethod() A: NOPE - this is easy to do: * *In the worksheet ABC where you have the Private Sub xyz_Click(), add a new public subroutine:
Public Sub ForceClickOnButtonXYZ()
    Call xyz_Click
End Sub
*In your other worksheet or module, add the code:
Sheets("ABC").Activate
Call Sheets("ABC").ForceClickOnButtonXYZ
That is it. If you do not want the screen to flicker or show any activity, set Application.ScreenUpdating = False before you call the Sheets("ABC").ForceClickOnButtonXYZ routine and set Application.ScreenUpdating = True right after it. A: I had a similar problem and it took a while to figure out, but this is all I had to do. On Sheet1 I created a button with the following:
Private Sub cmdRefreshAll_Click()
    Dim SheetName As String
    SheetName = "Mon"
    Worksheets(SheetName).Activate
    ActiveSheet.cmdRefresh_Click
    SheetName = "Tue"
    Worksheets(SheetName).Activate
    ActiveSheet.cmdRefresh_Click
    'I repeated the above code for every day of the week
End Sub
Then the important bit to get it to work: you need to go to the code on the other worksheets and change cmdRefresh_Click from Private to Public. So far no bugs have popped up. A: There's not a clean way to do this through code, since the button's click event would typically be a private method of the other workbook. However, you can automate the click through VBA code by opening the workbook, finding the control you want, and then activating it and sending it a space character (equivalent to pressing the button). This is almost certainly a terrible idea, and it will probably bring you and your offspring nothing but misery. I'd urge you to find a better solution, but in the meantime, this seems to work...
Public Sub RunButton(workbookName As String, worksheetName As String, controlName As String) Dim wkb As Workbook Dim wks As Worksheet Set wkb = Workbooks.Open(workbookName) wkb.Activate Dim obj As OLEObject For Each wks In wkb.Worksheets If wks.Name = worksheetName Then wks.Activate For Each obj In wks.OLEObjects If (obj.Name = controlName) Then obj.Activate SendKeys (" ") End If Next obj End If Next wks End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/130166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: config file syntax for configuring WCF Webservice Client Target EndPoint What is the web.config syntax for specifying a WCF web service proxy's default target endpoint? Specifically, I'm trying to configure the address that the client uses for locating the .asmx of the web service. A: Never mind - found it. The answer is: set the address attribute of the endpoint element, i.e. <endpoint address="http://fooland.com/bar.asmx" ... /> For anyone else who's challenged at searching MSDN like myself, the rest of the documentation for configuring client endpoints can be found at: http://msdn.microsoft.com/en-us/library/ms731762(VS.85).aspx
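In context, the endpoint element lives under system.serviceModel/client in web.config. A minimal sketch follows; the binding, contract and endpoint names here are placeholders - substitute the ones generated for your own service reference:
<system.serviceModel>
  <client>
    <!-- address is the attribute the answer refers to -->
    <endpoint address="http://fooland.com/bar.asmx"
              binding="basicHttpBinding"
              contract="FooService.FooServiceSoap"
              name="FooServiceEndpoint" />
  </client>
</system.serviceModel>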
{ "language": "en", "url": "https://stackoverflow.com/questions/130169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }