Q: Could not load type from assembly error I have written the following simple test in trying to learn Castle Windsor's Fluent Interface:
using NUnit.Framework;
using Castle.Windsor;
using System.Collections;
using Castle.MicroKernel.Registration;
namespace WindsorSample {
public class MyComponent : IMyComponent {
public MyComponent(int start_at) {
this.Value = start_at;
}
public int Value { get; private set; }
}
public interface IMyComponent {
int Value { get; }
}
[TestFixture]
public class ConcreteImplFixture {
[Test]
public void ResolvingConcreteImplShouldInitialiseValue() {
IWindsorContainer container = new WindsorContainer();
container.Register(Component.For<IMyComponent>().ImplementedBy<MyComponent>().Parameters(Parameter.ForKey("start_at").Eq("1")));
IMyComponent resolvedComp = container.Resolve<IMyComponent>();
Assert.AreEqual(resolvedComp.Value, 1);
}
}
}
When I execute the test through TestDriven.NET I get the following error:
System.TypeLoadException : Could not load type 'Castle.MicroKernel.Registration.IRegistration' from assembly 'Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc'.
at WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue()
When I execute the test through the NUnit GUI I get:
WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue:
System.IO.FileNotFoundException : Could not load file or assembly 'Castle.Windsor, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc' or one of its dependencies. The system cannot find the file specified.
If I open the Assembly that I am referencing in Reflector I can see its information is:
Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc
and that it definitely contains Castle.MicroKernel.Registration.IRegistration
What could be going on?
I should mention that the binaries are taken from the latest build of Castle though I have never worked with nant so I didn't bother re-compiling from source and just took the files in the bin directory. I should also point out that my project compiles with no problem.
A: I was getting this error and nothing I found on StackOverflow or elsewhere solved it, but bmoeskau's answer to this question pointed me in the right direction for the fix, which hasn't been mentioned yet as an answer. My answer isn't strictly related to the original question, but I'm posting it here under the assumption that someone having this problem will find their way here through searching Google or something similar (like myself one month from now when this bites me again, argh!).
My assembly is in the GAC, so there is theoretically only one version of the assembly available. Except IIS is helpfully caching the old version and giving me this error. I had just changed, rebuilt and reinstalled the assembly in the GAC. One possible solution is to use Task Manager to kill w3wp.exe. This forces IIS to reread the assembly from the GAC: problem solved.
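For example, from an elevated command prompt (this assumes the standard IIS worker process name):
taskkill /F /IM w3wp.exe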
A: I had this issue after refactoring a class name:
Could not load type 'Namespace.OldClassName' from assembly 'Assembly name...'.
Stopping IIS and deleting the contents of Temporary ASP.NET Files fixed it up for me.
Depending on your project (32/64 bit, .NET version, etc.) the correct Temporary ASP.NET Files location differs:
*
*64 Bit
%systemroot%\Microsoft.NET\Framework64\{.netversion}\Temporary ASP.NET Files\
*32 Bit
%systemroot%\Microsoft.NET\Framework\{.netversion}\Temporary ASP.NET Files\
*On my dev machine it was (because it's IIS Express, maybe?)
%temp%\Temporary ASP.NET Files
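For example, to clear the 64-bit .NET 4 cache (the version folder name here is just one common example; stop IIS first and adjust the path for your framework version):
del /s /q "%systemroot%\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\*"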
A: If this error is caused by changing the namespace, make sure that the folder of that project is renamed to the same name, then close VS.NET.
Edit the project file which has the problem with Notepad and replace these nodes:
<RootNamespace>New_Name_Of_Folder_Of_Your_Project_Namespace</RootNamespace>
<AssemblyName>New_Name_Of_Folder_Of_Your_Project_Namespace</AssemblyName>
A: When I run into such a problem, I find the FUSLOGVW tool very helpful. It checks assembly binding information and logs it for you. Sometimes the libraries are missing, sometimes the GAC has different versions that are being loaded. Sometimes the platform of referenced libraries is causing the problems. This tool makes it clear how the dependencies' bindings are being resolved, and that may really help you to investigate/debug your problem.
Fusion Log Viewer / fuslogvw / Assembly Binding Log Viewer. Check more/download here: http://msdn.microsoft.com/en-us/library/e74a18c4.aspx.
A: Version=1.0.3.0 indicates Castle RC3, however the fluent interface was developed some months after the release of RC3. Therefore, it looks like you have a versioning problem. Maybe you have Castle RC3 registered in the GAC and it's using that one...
A: I get this occasionally and it's always been down to having the assembly in the GAC.
A: Just run into this with another cause:
I was using a merged assembly created with ILRepack. The assembly you are querying the types from must be the first one passed to ILRepack or its types will not be available.
A: Deleting my .pdb file for the dll solved this issue for me. I'm guessing it has something to do with the fact that the dll was created using ILMerge.
A: Maybe not as likely, but for me it was caused by my application trying to load a library with the same assembly name (xxx.exe loading xxx.dll).
A: This usually happens when you have one version of your assembly deployed in the GAC but has not been updated with new classes that you might have added on the assembly on your IDE. Therefore make sure that the assembly on the GAC is updated with changes you might have made in your project.
E.g. if you have a Class Library of Common and in that Class Library you have Common.ClassA type and deploy this to the GAC strongly-named. You come later and add another type called Common.ClassB and you run your code on your IDE without first deploying the changes you made to the GAC of Common with the newly added Common.ClassB type.
A: Just run into this with another cause:
running unit tests in release mode but the library being loaded was the debug mode version which had not been updated
A: Yet another solution: Old DLLs pointing to each other and cached by Visual Studio in
C:\Users\[yourname]\AppData\Local\Microsoft\VisualStudio\10.0\ProjectAssemblies
Exit VS, delete everything in this folder and Bob's your uncle.
A: I had the same issue. I just resolved this by updating the assembly via GAC.
To use gacutil on a development machine go to:
Start -> programs -> Microsoft Visual studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt (2010).
I used these commands to uninstall and Reinstall respectively.
gacutil /u myDLL
gacutil /i "C:\Program Files\Custom\mydllname.dll"
Note: in my case I did not uninstall my DLL; I just updated it with the current path.
A: I ran into this scenario when trying to load a type (via reflection) in an assembly that was built against a different version of a reference common to the application where this error popped up.
As I'm sure the type is unchanged in both versions of the assembly I ended up creating a custom assembly resolver that maps the missing assembly to the one my application has already loaded. Simplest way is to add a static constructor to the program class like so:
using System.Reflection;
using System.Windows.Forms; // needed for Application.StartupPath below
static Program()
{
AppDomain.CurrentDomain.AssemblyResolve += (sender, e) => {
AssemblyName requestedName = new AssemblyName(e.Name);
if (requestedName.Name == "<AssemblyName>")
{
// Load assembly from startup path
return Assembly.LoadFile($"{Application.StartupPath}\\<AssemblyName>.dll");
}
else
{
return null;
}
};
}
This of course assumes that the Assembly is located in the startup path of the application and can easily be adapted.
A: I ended up having an old reference to a class (an HttpHandler) in web.config that was no longer being used (and was no longer a valid reference). For some reason it was ignored while running in Studio (or maybe I have that class still accessible within my dev setup?) and so I only got this error once I tried deploying to IIS.
I searched on the assembly name in web.config, removed the unused handler reference, then this error went away and everything works great.
A: I had the same issue and for me it had nothing to do with namespace or project naming.
But as several users hinted at it had to do with an old assembly still being referenced.
I recommend to delete all "bin"/binary folders of all projects and to re-build the whole solution. This washed out any potentially outdated assemblies and after that MEF exported all my plugins without issue.
A: If you have one project referencing another project (such as a 'Windows Application' type referencing a 'Class Library') and both have the same Assembly name, you'll get this error. You can either strongly name the referenced project or (even better) rename the assembly of the referencing project (under the 'Application' tab of project properties in VS).
A: Is the assembly in the Global Assembly Cache (GAC) or any place the might be overriding the assembly that you think is being loaded? This is usually the result of an incorrect assembly being loaded, for me it means I usually have something in the GAC overriding the version I have in bin/Debug.
A: You might be able to resolve this with binding redirects in *.config. http://blogs.msdn.com/b/dougste/archive/2006/09/05/741329.aspx has a good discussion around using older .net components in newer frameworks.
http://msdn.microsoft.com/en-us/library/eftw1fys(vs.71).aspx
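As a sketch, a binding redirect for the Castle assembly from the original question might look like this in the app's *.config (the version range here is illustrative, not authoritative):
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Castle.MicroKernel" publicKeyToken="407dd0808d44fbdc" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.0.3.0" newVersion="1.0.3.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>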
A: I got the same error after updating a referenced DLL in a desktop executable project. The issue was, as people here mentioned, related to an old reference and simple to fix, but it hadn't been spelled out here, so I thought it might save other people's time.
Anyway, I updated DLL A and got the error from another referenced DLL, let's call it B here, where DLL A has a reference to DLL B.
Updating DLL B fixed the issue.
A: Adding your DLL to the GAC (Global Assembly Cache):
Visual Studio Command Prompt => Run as Administrator
gacutil /i "dll file path"
You can see the added assembly under C:\Windows\assembly\
This will also solve DLL-missing or "Could not load file or assembly" errors in an SSIS Script Task.
A: I experienced a similar issue in Visual Studio 2017 using MSTest as the testing framework. I was receiving System.TypeLoadException exceptions when running some (not all) unit tests, but those unit tests would pass when debugged. I ultimately did the following which solved the problem:
*
*Open the Local.testsettings file in the solution
*Go to the "Unit Test" settings
*Uncheck the "Use the Load Context for assemblies in the test directory." checkbox
After taking these steps all unit tests started passing when run.
A: I experienced the same as above after removing signing of assemblies in the solution. The projects would not build.
I found that one of the projects referenced the StrongNamer NuGet package, which modifies the build process and tries to sign non-signed Nuget packages.
After removing the StrongNamer package I was able to build the project again without signing/strong-naming the assemblies.
A: If this is a Windows app, try checking for a duplicate in the Global Assembly Cache (GAC). Something is overriding your bin / debug version.
If this is a web app, you may need to delete on server and re-upload. If you are publishing you may want to check the Delete all existing files prior to publish check box. Depending on Visual Studio version it should be located in Publish > Settings > File Publish Options
A: I want to mention some additional things and draw a conclusion.
Commonly, as others have explained, this exception is raised when there is a problem loading the expected type (in an existing assembly) at runtime; more specifically, whenever a loaded assembly is stale (and does not contain the required type), or when an assembly with a different version or build that does not contain the expected type is already loaded.
Also, I must mention that it seems rare, but it is possible for this to be a runtime limitation or a compiler bug (where the compiler does not complain about something and just compiles problematic code), with the runtime then throwing a System.TypeLoadException. Yes; this actually happened to me!
An example (which occurred for me)
Consider a struct that defines a nullable field of its own type inside itself. Defining such a field as non-nullable gives you a compile-time error and prevents you from building (obviously this behaviour has a logical reason). But what about nullable struct fields? Actually, nullable value types are Nullable<T> behind the scenes. As the C# compiler did not prevent me from defining it this way, I tried it and built the project. But I got the runtime exception System.TypeLoadException: 'Could not load type 'SomeInfo' from the assembly ...', and it seems to be a problem loading that part of my code (otherwise, we might say the compiler did not truly complete the compilation process), at least for me in my environment:
public struct SomeInfo
{
public SomeInfo? Parent { get; }
public string info { get; }
}
My environment specs:
*
*Visual Studio 2019 16.4.4
*Language Version: c# 7.3
*Project type: netstandard 2.0.3 library
*Runtime: classic DotNetFramework 4.6.1
(I checked and ensured that the compiled assembly was up to date and actually newly compiled. I even threw away all bin directories and refreshed the whole project. I even created a new project in a fresh solution and tested it; the struct still generates the exception.)
It seems to be a runtime limitation (logical or technical), or a bug, or some similar problem, because Visual Studio does not prevent me from compiling, and other newer parts of my code (excluding this struct) execute fine.
Changing the struct to a class, the compiled assembly contains and executes the type correctly.
I have no idea how to explain in detail why this behaviour occurs in my environment, but I faced this situation.
Conclusion
Check these situations:
*
*When the same (probably older) assembly already exists in the GAC and is overriding the referenced assembly.
*When re-compilation was needed but not performed automatically (so the referenced assembly is not updated) and there is a need to build manually or to fix the solution build configuration.
*When a custom tool or middleware or third-party executable (such as a test-runner tool or IIS) loaded an older version of the assembly from some cache, and there is a need to clean things up or force a reset.
*When an inappropriate configuration causes a custom tool or middleware or third-party executable that loads the assembly to demand a type that no longer exists, and there is a need to update the configuration or clean things up (such as removing an HTTP handler in the web.config file on an IIS deployment, as @Brian-Moeskau said).
*When there is a logical or technical problem for the runtime in executing the compiled assembly (for example, when the compiler compiled problematic code that the runtime in use cannot understand or execute), as I faced.
A: Usually either the DLL is not found, or the DLL exists but the new class cannot be found.
*
*If the DLL does not exist, update the DLL directly.
*If the DLL is there, pull it down and use ILSpy to see whether it contains the class that the error reports.
If not, rebuild the DLL and upload it again.
A: I tried most of the above, but none of them worked for me.
Here was my issue.
I moved a c# project from my local machine (Windows 10) to the server (Windows Server 2008 R2). I started getting the 'could not load assembly' error when trying to run tests from my UnitTest project.
Here is my solution.
I removed the test project from the solution which was a Unit Test Project .NET Framework. I also made sure to delete the project. I then created a new MSTest Test Project .NET Core project.
A: I got this error when trying to load a Blazor component in a .NET 5 C# MVC solution that broke out the Blazor components into its own project in the solution:
My DTO objects are used specifically for the Blazor stuff and have no ref to AspNetCore.Mvc stuff (as that breaks Blazor).
So strangely, when I try to load the component with just the object I need, like this (in the MVC View):
<component type="typeof(My.Client.Components.MyComponent)" render-mode="WebAssemblyPrerendered"
param-OpenYearMonth="Model.OpenYearMonth?.ToDTO()" />
I get the same error as the OP in my browser console. However, for some strange reason, just adding an extra dummy parameter causes it to load just fine:
<component type="typeof(My.Client.Components.MyComponent)" render-mode="WebAssemblyPrerendered"
param-Whatever="new List<My.DTO.Models.Whatever>()"
param-OpenYearMonth="Model.OpenYearMonth?.ToDTO()" />
I think the problem is that the component is being rendered without the DTO assembly being loaded? Maybe? It doesn't make much sense and certainly seems like a bug in the MVC or Blazor... but I guess at least there is some work around. I also tried doing some Task.Delay experiments, thinking maybe the DTO assembly was still being loaded or something but that didn't help. I also tried creating a local variable in the View with this Model.OpenYearMonth?.ToDTO() and passing that local variable to the component... again no help. Of course I also tried all of the solutions in the answers of this post and none of that helped. Don't ask how many hours this issue wasted...
A: Had a similar System.TypeLoadException : Could not load type 'referenced assembly'.'non-existing class'Extension.
I had added a new String extension to the referenced assembly, deployed it along with a second assembly which had also changed (and used the referenced assembly) to the web application bin folder.
Although the web application used the referenced assembly it did not use the new extension method. I understood I retained binary compatibility by not changing the assembly version (just the file version) of the referenced assembly and assumed it would be loaded by the web application assembly.
It appears that the web application assembly would still not load the newer file version of the referenced assembly from its bin folder.
To fix I simply recompiled the web application assembly and redeployed it.
A: I just resolved this by running the iisreset command using the command prompt...
Always the first thing I do when I get such errors.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "153"
}
|
Q: Flags in database rows, best practices I am asking this out of curiosity. Basically my question is: when you have a database which needs a row entry to have things which act like flags, what is the best practice? A good example of this would be the badges on Stack Overflow, or the operating system field in Bugzilla. Any subset of the flags may be set for a given entry.
Usually, I do C and C++ work, so my gut reaction is to use an unsigned integer field as a set of bits which can be flipped... But I know that isn't a good solution for several reasons. The most obvious of which is scalability: there will be a hard upper limit on how many flags I can have.
I can also think of a couple of other solutions which scale better but would have performance issues because they would require multiple selects to get all the information.
So, what is the "right" way to do this?
A: For many cases, it depends on a lot of things - like your database backend. If you're using MySQL, for example, the SET datatype is exactly what you want.
Basically, it's just a bitmask, with values assigned to each bit. MySQL supports up to 64-bit values (meaning 64 different toggles). If you only need 8, then it only takes a byte per row, which is pretty awesome savings.
If you honestly have more than 64 values in a single field, your field might be getting more complicated. You may want to expand to the BLOB datatype, which is just a raw set of bits that MySQL has no inherent understanding of. Using this, you can create an arbitrary number of bit fields that MySQL is happy to treat as binary, hex, or decimal values, however you need. If you need more than 64 options, create as many fields as is appropriate for your application. The downside is that it is difficult to make the field human readable. The BIT datatype is also limited to 64.
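For illustration, a SET column and a query against it might look like this (the table and flag names are made up):
CREATE TABLE user_flags (
    user_id INT NOT NULL,
    flags SET('registered', 'verified', 'banned') NOT NULL DEFAULT ''
);
-- FIND_IN_SET tests membership in the comma-separated SET value
SELECT user_id FROM user_flags WHERE FIND_IN_SET('verified', flags) > 0;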
A: A Very Relational Approach
For databases without the set type, you could open a new table to represent the set of entities for which each flag is set.
E.g. for a table "Students" you could have tables "RegisteredStudents", "SickStudents", "TroublesomeStudents", etc. Each table will have only one column: the student_id. This would actually be very fast if all you want to know is which students are "Registered" or "Sick", and would work the same way in every DBMS. A minimal sketch follows.
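Reusing the names from the example above:
CREATE TABLE Students (student_id INT PRIMARY KEY);
CREATE TABLE SickStudents (student_id INT PRIMARY KEY REFERENCES Students(student_id));
-- Flag a student as sick:
INSERT INTO SickStudents (student_id) VALUES (42);
-- Which students are sick?
SELECT student_id FROM SickStudents;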
A: If the flags have very different meanings and are used directly in SQL queries or VIEWS, then using multiple columns of type BOOLEAN might be a good idea.
Put each flag into an extra column, because you'll read and modify them separately anyway. If you want to group the flags, just give their column names a common prefix, i.e. instead of:
CREATE TABLE ... (
warnings INTEGER,
errors INTEGER,
...
)
you should use:
CREATE TABLE ... (
warning_foo BOOLEAN,
warning_bar BOOLEAN,
warning_...
error_foo BOOLEAN,
error_bar BOOLEAN,
error_... BOOLEAN,
...
)
Although MySQL doesn't have a BOOLEAN type, you can use the quasi standard TINYINT(1) for that purpose, and set it only to 0 or 1.
A: Generally speaking, I avoid bitmask fields. They're difficult to read in the future and they require a much more in-depth knowledge of the data to understand.
The relational solution has been proposed previously. Given the example you outlined, I would create something like this (in SQL Server):
CREATE TABLE Users (
UserId INT IDENTITY(1, 1) PRIMARY KEY,
FirstName VARCHAR(50),
LastName VARCHAR(50),
EmailAddress VARCHAR(255)
);
CREATE TABLE Badges (
BadgeId INT IDENTITY(1, 1) PRIMARY KEY,
[Name] VARCHAR(50),
[Description] VARCHAR(255)
);
CREATE TABLE UserBadges (
UserId INT REFERENCES Users(UserId),
BadgeId INT REFERENCES Badges(BadgeId)
);
A: If you really need an unbounded selection from a closed set of flags (e.g. stackoverflow badges), then the "relational way" would be to create a table of flags and a separate table which relates those flags to your target entities. Thus, users, flags and usersToFlags.
However, if space efficiency is a serious concern and query-ability is not, an unsigned mask would work almost as well.
A: I would recommend using a BOOLEAN datatype if your database supports this.
Otherwise, the best approach is to use NUMBER(1) or equivalent, and put a check constraint on the column that limits valid values to (0,1) and perhaps NULL if you need that. If there is no built-in type, using a number is less ambiguous than using a character column. (What's the value for true? "T" or "Y" or "t"?)
The nice thing about this is that you can use SUM() to count the number of TRUE rows.
SELECT COUNT(1), SUM(ActiveFlag)
FROM myusers;
A: If there are more than just a few flags, or likely to be so in the future, I'll use a separate table of flags and a many-to-many table between them.
If there are a handful of flags and I'm never going to use them in a WHERE, I'll use a SET() or bitfield or whatever. They're easy to read and more compact, but a pain to query and sometimes even more of a headache with an ORM.
If there are only a few flags -- and only ever going to be a few flags -- then I'll just make a couple BIT/BOOLEAN/etc columns.
A: Came across this when I was pondering the best way to store bitmask flags (similar to the OP's original use of integers) in a database.
The other answers are all valid solutions, but I think it's worth mentioning that you may not have to resign yourself to horrible query problems if you choose to store bitmasks directly in the database.
If you are working on an application that uses bitmasks and you really want the convenience of storing them in the database as one integer or byte column, go ahead and do that. Down the road, you can write yourself a little utility that will generate another table of flags (in whatever pattern of rows/columns you choose) from the bitmasks in your primary working table. You can then do ordinary SQL queries on that computed/derived table.
This way your application gets the convenience of only reading/writing the bitmask field/column. But you can still use SQL to really dive into your data if that becomes necessary at a later time.
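As a rough sketch of such a derived, queryable form (the bit positions are hypothetical):
-- Suppose bit 0 = active, bit 1 = banned, bit 2 = admin in users.flags
CREATE VIEW user_flag_view AS
SELECT id,
       (flags & 1) <> 0 AS is_active,
       (flags & 2) <> 0 AS is_banned,
       (flags & 4) <> 0 AS is_admin
FROM users;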
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: OpenGL still better than Direct3D for non-games? The standard model has been that OpenGL is for professional apps (CAD) and Direct3D is for games.
With the debacle of OpenGL 3.0, is OpenGL still the natural choice for technical 3D apps (CAD/GIS)?
Are there scenegraph libraries for Direct3D?
(Of course Direct3D is windows only.)
A: This doesn't directly answer your question (sorry), but I was one of the original guys working on Direct3D and hehe:
Are there scenegraph libraries for
Direct3D?
Direct3D (nee Reality Lab) used to just be a scenegraph library :-)
Shame that Direct3D Retained Mode isn't shipped anymore...
A: D3D makes you pay the Microsoft "strategy tax." That is, D3D serves two masters. One is giving you features and performance. The other is to ensure lock-in to other MS products and the Windows platform generally. This has some consequences for you:
*
*A D3D app won't run on anything but Windows (including Xbox). Maybe you don't think that's important now. But if, down the road, you want to run on Mac, Linux, PS3, future consoles, etc., you may be glad you chose the platform-independent choice.
*MS can make some arbitrary decisions. Will the next version of D3D only run on an OS that requires new hardware, is expensive, and lots of people don't want to upgrade to? Will they make some other future decision you don't agree with?
*Historically, OpenGL has led D3D in quick exposure of new HW features. This is because there's a mechanism in the standard for vendors to add their own extensions, and for those extensions to eventually be folded into the main spec. D3D is whatever MS wants it to be, with input from vendors to be sure, but MS gets veto power. You could easily be in a situation like with Vista, where MS decided not to expose new HW features to the old DX, and only make the new DX available on Vista. This was quite a headache for game developers.
Now then, this is the flavor of reasons why a "professional app" (CAD, animation, scientific visualization, GIS, etc.) would favor OGL -- apps like this want to be stable for many years, need ongoing maintenance and improvement, and want to run on many platforms. This is in contrast to games, which quite frequently are only on one platform, will be released but generally not "maintained" (there likely won't be a 2.0, an update for another OS three years hence, don't need to support older HW, etc.). Games want maximum performance and only need to work for a short time window and on a fixed number of platforms. If they need to target Windows anyway and D3D is a little faster, that may be the right choice since the negative D3D consequences won't hurt them like it would for a CAD app, say.
A: Direct3D is only available on Windows and XBox. If you plan on targeting Unix or Mac, in addition to Windows, OpenGL is a good choice.
A: To me, speaking as a graphics programmer in a large CAD company, there are currently three things that keep OpenGL from being dropped in favor of Direct3D (10/11) for existing CAD/DCC applications:
*
*Legacy. Most CAD software is more or less tightly tied to OpenGL concepts. Rewriting everything around the D3D philosophy is not always manageable (technically speaking, and depending on the ability/willingness of the company to throw resources at it). Secondly, it could have negative impacts if something goes wrong (functionality-wise and/or performance-wise), and a company would just not take the risk of delaying a major release because of an API and architecture switch.
*People. People in those old-boy industries are fairly conservative. They prefer spending money/time on dealing with IHVs regarding all the OpenGL driver issues they could have, or simply won't recommend using graphics from a specific IHV (e.g. Intel or ATI). Plus, I believe that the automotive/aerospace industry customers like Boeing, Airbus, TMC, BMW, etc. don't care that much about having Quadro-only workstations. Hardware is still cheap in regard to software licence prices.
*Future trends. With future generations of processors like Llano, Larrabee, Fermi and the products that will follow, it's definitely worth investing R&D budgets in how to program and develop new languages/APIs/frameworks for that future hardware (for graphics as well as non-graphics tasks). The CAD industry has huge cycles and won't move if not for a truly disruptive technology. So D3D might just come a bit late for the big CAD players (except Autodesk of course, which is a particular case).
A: Like always, this depends on your situation.
In my experience, currently (2008) OpenGL driver quality on Windows is much worse than Direct3D driver quality. If your situation is such that you can't reasonably demand your customers to always have up-to-date drivers, or tell them to change their graphics cards to the ones that have better OpenGL drivers, then OpenGL is a pretty bad choice. In this case, I'd go for D3D renderer on Windows, and OpenGL renderer on OS X/Linux (if you have to support those platforms, that is).
Having two renderers is not that hard; in my experience working around driver bugs takes much more time than writing and supporting the rendering code paths.
Of course, there are some (specific) situations where D3D just does not have the features needed, e.g. quad-buffered genlocked output; or geometry shader support on Windows XP.
So in short: if you want better drivers on Windows, use D3D. If you don't care about driver quality much, or need features that are in OpenGL but not in D3D, then use OpenGL.
A: Not an answer as such but is interesting to note the latest versions of AutoCAD give you a choice between using OpenGL or Direct3D drivers in the "3dconfig" setup option. Fundamentally, AutoCAD is a Windows-only app so it made sense for them to (finally) support Direct3D. See page 15 of this Whitepaper from AutoDesk for more info.
A: The answer everyone wants to believe is YES but the real answer is, it depends.
About 50% of all hardware out there will not run OpenGL to any reasonable level. To see an example of this, read the FAQ for Google Sketchup, which uses OpenGL:
SketchUp's performance relies heavily on the graphics card driver and its ability to support OpenGL 1.5 or higher. Historically, people have seen problems with ATI Radeon cards and Intel based cards with SketchUp. We don't recommend using these graphics cards with SketchUp at this time. (Hardware and software requirements for Google Sketchup)
So, if you're writing a CAD app and you don't mind telling your customers, "you must have an OpenGL compliant video card", then the answer is yes.
If instead you are creating an end-user app (Google Earth for example) then the sad answer is you'll have to write both a D3D and GL version of your app if you want to reach the entire market.
A: Perhaps you should try an abstraction layer such as OGRE, which lets you switch between DirectX and OpenGL without having to rewrite anything? It also adds a lot of functionality, and isn't game-oriented, but pretty generic.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: How can I select an element programmatically using JavaScript? I have an <img> in an HTML document that I would like to highlight as though the user had highlighted it using the mouse. Is there a way to do that using JavaScript?
I only need it to work in Mozilla, but any and all information is welcome.
EDIT: The reason I want to select the image is actually not so that it appears highlighted, but so that I can then copy the selected image to the clipboard using XPCOM. So the img actually has to be selected for this to work.
A: Here's an example which selects the first image on the page (which will be the Stack Overflow logo if you test it out on this page in Firebug):
var s = window.getSelection();
var r = document.createRange();
r.selectNode(document.images[0]);
s.addRange(r);
Relevant documentation:
*
*http://developer.mozilla.org/en/DOM/window.getSelection
*http://developer.mozilla.org/en/DOM/range.selectNode
*http://developer.mozilla.org/en/DOM/Selection/addRange
A: You might also want to call s.removeAllRanges() before s.addRange(r).
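Putting those two answers together, a slightly more defensive version of the snippet reads:
var s = window.getSelection();
s.removeAllRanges(); // drop any existing selection before adding ours
var r = document.createRange();
r.selectNode(document.images[0]);
s.addRange(r);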
A: What, exactly, are you trying to do? If you're using XPCOM, you're presumably writing an application or an extension for one; can't you just get the image data and put it on the clipboard directly?
A: My personal choice for selecting elements is jQuery.
Then, to get the element of your choice:
$("img#YOURIMAGEHERE").focus();
A: Give the img tag an ID. Use document.getElementById('id').
<script type="text/javascript" language="javascript">
function highLight()
{
var img = document.getElementById('myImage');
img.style.border = "inset 2px black";
}
</script>
<img src="whatever.gif" id="myImage" onclick="hightLight()" />
EDIT::
You might try .focus to give it focus.
A: You can swap the source of the image, as in img.src = "otherimage.png";
I actually did this at one point, and there are things you can do to pre-load the images.
I even set up special attributes on the image elements such as swap-image="otherimage.png", then searched for any elements that had it, and set up handlers to automatically swap the images... you can do some fun stuff.
Sorry I misunderstood the question! but anyways, for those of you interested in doing what I am talking about, here is an example of what I mean (crude implementation, I would suggest using frameworks like jQuery to improve it, but just something to get you going):
<html>
<body>
<script language="javascript">
function swap(name) {
document.getElementById("image").src = name;
}
</script>
<img id="image" src="test1.png"
onmouseover="javascript:swap('test0.png');"
onmouseout="javascript:swap('test1.png');">
</body>
</html>
A: The basic idea of the "highLight" solution is ok, but you probably want to set a "static" border style (defined in css) for the img with the same dimensions as the one specified in the highLight method, so it doesn't cause a resize.
In addition, I believe that if you change the call to "highLight(this)", and the function def to "highLight(obj)", then you can skip the "document.getElementById()" call (and the specification of the "id" attribute for "img"), as long as you do "obj.style.border" instead.
You probably also need to spell "highLight" correctly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do I prevent a class from being allocated via the 'new' operator? I'd like to ensure my RAII class is always allocated on the stack.
A: I'm not convinced of your motivation.
There are good reasons to create RAII classes on the free store.
For example, I have an RAII lock class. I have a path through the code where the lock is only necessary if certain conditions hold (it's a video player, and I only need to hold the lock during my render loop if I've got a video loaded and playing; if nothing's loaded, I don't need it). The ability to create locks on the free store (with a unique_ptr) is therefore very useful; it allows me to use the same code path regardless of whether I have to take out the lock.
i.e. something like this:
unique_ptr<lock> l;
if(needs_lock)
{
l.reset(new lock(mtx));
}
render();
If I could only create locks on the stack, I couldn't do that....
A: All you need to do is declare the class' new operator private:
class X
{
private:
// Prevent heap allocation
void * operator new (size_t);
void * operator new[] (size_t);
void operator delete (void *);
void operator delete[] (void*);
// ...
// The rest of the implementation for X
// ...
};
Making 'operator new' private effectively prevents code outside the class from using 'new' to create an instance of X.
To complete things, you should hide 'operator delete' and the array versions of both operators.
Since C++11 you can also explicitly delete the functions:
class X
{
// public, protected, private ... does not matter
static void *operator new (size_t) = delete;
static void *operator new[] (size_t) = delete;
static void operator delete (void*) = delete;
static void operator delete[](void*) = delete;
};
Related Question: Is it possible to prevent stack allocation of an object and only allow it to be instantiated with ‘new’?
A: @DrPizza:
That's an interesting point you have. Note though that there are some situations where the RAII idiom isn't necessarily optional.
Anyway, perhaps a better way to approach your dilemma is to add a parameter to your lock constructor that indicates whether the lock is needed. For example:
class optional_lock
{
mutex& m;
bool dolock;
public:
optional_lock(mutex& m_, bool dolock_)
: m(m_)
, dolock(dolock_)
{
if (dolock) m.lock();
}
~optional_lock()
{
if (dolock) m.unlock();
}
};
Then you could write:
optional_lock l(mtx, needs_lock);
render();
A: In my particular situation, if the lock isn't necessary the mutex doesn't even exist, so I think that approach would be rather harder to fit.
I guess the thing I'm really struggling to understand is the justification for prohibiting creation of these objects on the free store.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: Options for Dynamic content in ASP.Net What choices do I have for creating stateful dynamic content in an ASP.Net web site?
Here's my scenario. I have a site that has multiple, nested content regions. The top level are actions tied to a functional area Catalog, Subscriptions, Settings.
When you click on the functional action, I want to dynamically add content specific to that action. For example, when Catalog is clicked, I want to display a tree with the catalog folders & files, and a region to the right for details.
When a user clicks on the tree, I want a context sensitive details to load in the details region (like properties or options to manage the files).
I started with UserControls. They worked fine as long as I kept loading everything into the page, and never let one disappear. As soon as one disappeared, ViewState for the page blew up because the view state tree was invalid.
(I didn't want to keep loading stuff into my page because I don't want the responses to be too huge)
So, my next approach was to replace my dynamic regions with IFrames. Then instead of instantiating a UserControl, I would just change the source on my IFrame. Since the contents of the IFrames were independent pages I didn't run into any ViewState problems.
But, I'm concerned that IFrames might be a bad design choice, but don't fully understand why. The site is not public, so search engines aren't a concern.
So, finally to my question.
What are my options for this scenario? If I choose an Ajax Solution (jQuery), will I have to maintain my own ViewState? Are there any other considerations I should take into account?
A: Controls that are added dynamically do not persist in viewstate, and this is the reason that it doesn't matter if you use AJAX or iframes or whatever.
One possible work-around is to re-populate controls on postback. The problem with this, is the page life-cycle (simplified) is:
*
*Initialize
*LoadViewState
*Load Postback Data
*Call control Load events
*Call Load event
*Call control events
*Control PreRender
*PreRender
*SaveViewState
*Unload
What this means is the only place to re-add your dynamic controls is Initialize -- otherwise posted data (or viewstate information) is not loaded into that control. But often, because viewstate/postback data isn't available yet in Initialize, your code doesn't have the information it needs to figure out which controls need to be added.
The only other work-around I've found in this situation is to use a 3rd party control called DynamicControlsPlaceholder. This works quite well, and persists the control information in viewstate.
In your particular case, it doesn't seem like there are that many choices/cases. Is it practical just to have all the different sets of controls in the page, and put them inside of asp:placeholder controls, and then just set one to visible, depending on what is selected?
A: You've got a number of different options, and yes, IFrames were a bad design choice.
The first option is the AJAX solution. And with that there's not really a viewstate scenario, it's just you're passing data back and forth with the webserver, building the UI on the fly as needed.
The next option is to dynamically add the controls you need for a given post, every time. The way this would work is that at the start of the page life cycle, you'd need to rebuild the page exactly as it was sent out the last time, then dump out all the unneeded controls and build just those that you want.
A third option would be to use Master pages. Your top level content could be on the Master page itself, and have links to various pages within the website.
I'm sure given enough time, I could come up with more, but these 3 appeared just from reading your problem.
A: Some other options:
*
*Content only appears to be dynamic. You load enough controls on the page to handle anything and only actually show what you need. This saves a lot of hassle messing with view state and such, but means your page has a bigger footprint.
*Add controls to the page dynamically. You've already been playing with this, so you've seen some of the issues here. Just remember that the place to create your dynamic controls for postbacks is in the Page_Init() event, and that if you want them to be stateful, you need to keep that state somewhere. I recommend a database.
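As a rough sketch of that second approach (the control path, ID and placeholder names are invented for the example):
protected void Page_Init(object sender, EventArgs e)
{
    // Re-create the same dynamic control on every request, postbacks included,
    // with a stable ID so posted data gets matched back up to it.
    Control details = LoadControl("~/Controls/CatalogDetails.ascx");
    details.ID = "CatalogDetails";
    DetailsPlaceholder.Controls.Add(details);
}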
A: Dynamic controls and viewstate don't mix well, as noted above - but that is a Good Thing, because even if they did, the viewstate for a complex dynamic page would get so bloated that performance would diminish to nil.
Use Ajax [I like AJAX PRO because it is very simple to use] and manage the page state yourself [in session, database tables, or whatever works for your scenario]. This will be a bit more complicated to get going, but the results will be efficient and responsive: each page can update only what needs to change, and you won't be blowing a giant viewstate string back and forth all the time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: XML Schema (XSD) validation tool? At the office we are currently writing an application that will generate XML files against a schema that we were given. We have the schema in an .XSD file.
Are there tools or libraries that we can use for automated testing to check that the generated XML matches the schema?
We would prefer free tools that are appropriate for commercial use although we won't be bundling the schema checker so it only needs to be usable by devs during development.
Our development language is C++ if that makes any difference, although I don't think it should as we could generate the xml file and then do validation by calling a separate program in the test.
A: The online XML Schema Validator from DecisionSoft allows you to check an XML file against a given schema.
A: I use Xerces:
http://xerces.apache.org/xerces-c/
A: After some research, I think the best answer is Xerces, as it implements all of XSD, is cross-platform and widely used. I've created a small Java project on github to validate from the command line using the default JRE parser, which is normally Xerces. This can be used on Windows/Mac/Linux.
There is also a C++ version of Xerces available if you'd rather use that. The StdInParse utility can be used to call it from the command line. Also, a commenter below points to this more complete wrapper utility.
You could also use xmllint, which is part of libxml. You may well already have it installed. Example usage:
xmllint --noout --schema XSD_FILE XML_FILE
One problem is that libxml doesn't implement all of the specification, so you may run into issues :(
Alternatively, if you are on Windows, you can use msxml, but you will need some sort of wrapper to call it, such as the GUI one described in this DDJ article. However, it seems most people on Windows use an XML Editor, such as Notepad++ (as described in Nate's answer) or XML Notepad 2007 as suggested by SteveC (there are also several commercial editors which I won't mention here).
Finally, you'll find different programs will, unfortunately, give different results. This is largely due to the complexity of the XSD spec. You may want to test your schema with several tools.
UPDATE: I've expanded on this in a blog post.
A: xmlstarlet is a command-line tool which will do this and more:
$ xmlstarlet val --help
XMLStarlet Toolkit: Validate XML document(s)
Usage: xmlstarlet val <options> [ <xml-file-or-uri> ... ]
where <options>
-w or --well-formed - validate well-formedness only (default)
-d or --dtd <dtd-file> - validate against DTD
-s or --xsd <xsd-file> - validate against XSD schema
-E or --embed - validate using embedded DTD
-r or --relaxng <rng-file> - validate against Relax-NG schema
-e or --err - print verbose error messages on stderr
-b or --list-bad - list only files which do not validate
-g or --list-good - list only files which validate
-q or --quiet - do not list files (return result code only)
NOTE: XML Schemas are not fully supported yet due to its incomplete
support in libxml2 (see http://xmlsoft.org)
XMLStarlet is a command line toolkit to query/edit/check/transform
XML documents (for more information see http://xmlstar.sourceforge.net/)
Usage in your case would be along the lines of:
xmlstarlet val --xsd your_schema.xsd your_file.xml
A: An XML editor for quick and easy XML validation is available at http://www.xml-buddy.com
You just need to run the installer and after that you can validate your XML files with an easy to use desktop application or the command-line. In addition you also get support for Schematron and RelaxNG. Batch validation is also supported...
Update 1/13/2012: The command line tool is free to use and uses Xerces as XML parser.
A: I'm just learning Schema. I'm using RELAX NG and using xmllint to validate. I'm getting frustrated by the errors coming out of xmllint. I wish they were a little more informative.
If there is a wrong attribute in the XML then xmllint tells you the name of the unsupported attribute. But if you are missing an attribute in the XML you just get a message saying the element can not be validated.
I'm working on some very complicated XML with very complicated rules, and I'm new to this so tracking down which attribute is missing is taking a long time.
Update: I just found a java tool I'm liking a lot. It can be run from the command line like xmllint and it supports RELAX NG: https://msv.dev.java.net/
A: I found this online validator from 'corefiling' quite useful -
http://www.corefiling.com/opensource/schemaValidate.html
After trying few tools to validate my xsd, this is the one which gave me detailed error info - so I was able to fix the error in schema.
A: For Windows there is the free XML Notepad 2007.
You can select XSD's for it to validate against
UPDATE: better yet, use Notepad++ with the XML Tools plugin
A: There's a plugin for Notepad++ called XML Tools that offers XML verification and validation against an XSD.
You can see how to use it here.
A: http://www.xmlvalidation.com/
(Be sure to check the " Validate against external XML schema" Box)
A: One great visual tool to validate and generate XSD from XML is IntelliJ IDEA; intuitive and simple.
A: You can connect your XML schema to Microsoft Visual Studio's Intellisense. This option gives you both real-time validation AND autocomplete, which is just awesome.
I have this exact scenario running on my free copy of Microsoft Visual C++ 2010 Express.
A: I tend to use xsd from Microsoft to help generate the xsd from a .NET file. I also parse out sections of the xml using xmlstarlet. The final free tool that would be of use to you is altovaxml, which is available at this URL: http://www.altova.com/download_components.html .
This allows me to scan all the xml files picking up which xsd to use by parsing the xml.
# Function:
# verifyschemas - Will validate all xml files in a configuration directory against the schemas in the passed in directory
# Parameters:
# The directory where the schema *.xsd files are located. Must be using dos pathing like: VerifySchemas "c:\\XMLSchemas\\"
# Requirements:
# Must be in the directory where the configuration files are located
#
verifyschemas()
{
for FILENAME in $(find . -name '*.xml' -print0 | xargs -0)
do
local SchemaFile=$1$(getconfignamefromxml $FILENAME).xsd
altovaxml /validate $FILENAME /schema $SchemaFile > ~/temp.txt 2> /dev/null
if [ $? -ne 0 ]; then
printf "Failed to verify: "
cat ~/temp.txt | tail -1 | tr -d '\r'
printf " - $FILENAME with $SchemaFile\n"
fi
done
}
To generate the xml I use:
xsd DOTNET.dll /type:CFGCLASS & rename schema0.xsd CFGCLASS.xsd
To get the xsd name I use:
xmlstarlet sel -t -m /XXX/* -v local-name() $1 | sed 's/ $//'
This allows me to pickup the correct XSD using an element tag within the xml file.
The net result is that I can call a bash function to scan all the XML files and verify them. Even if they are in multiple subdirectories.
A: Another online XML Schema (XSD) validator: http://www.utilities-online.info/xsdvalidation/.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "274"
}
|
Q: How does the .doc format work? I recently learned about the basic structure of the .docx file (it's a specially structured zip archive). However, docx is not formatted like a doc.
How does a doc file work? What is the file format, structure, etc?
A: The basic idea behind the MS Word DOC format is an OLE Compound Document which, as Kibbee has already written, is basically a memory dump. It's a very complex and convoluted way to store documents, but if you've ever really dug into the application Word you'll know how insanely many features it has, and if you have used it in a business setting you'll have a good feeling for how it integrates with other programs in the Office series.
In general, OLE Compound Documents are very extensible structures that allow you to stuff all kinds of data into one file and even, to some degree, handle data you don't have an application installed for. For example, if you insert an Equation object (from the MS Equation Editor) into a document it gets stored as a sub-object which is like a file inside the file, but this object doesn't just contain the data required for Equation Editor to edit and render it, it also has a generic bitmap (or metafile, maybe) representation stored so it can be displayed, though not edited, on a machine without Equation Editor installed.
This was the why, for the how you'll have to read the specifications other people have linked to already ;)
If you want the easy way out to work with the files though, make sure your software runs on a Windows machine with Word installed, then use COM/OLE Automation to open and manipulate the documents. You won't have to worry about file format then.
A: It's not a direct answer to your question, but I highly recommend reading Joel Spolsky's article, Why are the Microsoft Office file formats so complicated? (And some workarounds). It will give you some insight into how complex the .doc format really is - and why. Joel also gives a very basic overview of what the .doc format consists of:
You see, Excel 97-2003 files are OLE compound documents, which are, essentially, file systems inside a single file. These are sufficiently complicated that you have to read another 9 page spec to figure that out. And these “specs” look more like C data structures than what we traditionally think of as a spec. It's a whole hierarchical file system.
(The quote refers to Excel files but it applies to Word docs as well). Informative article and helpful in understanding why .docx and ODF files are structured and designed so much more logically when being examined from an outside perspective.
A: The full format for binary .doc files is documented in this PDF, linked from the Wikipedia article on .doc.
A: The .doc format is quite complex. Like most Microsoft formats, it reflects a long history of changes between versions and legacy support. They published it not too long ago, so if you want to view it (and other pre-Office 2007 formats), knock yourself out here.
A: Doc is the binary format of word document - here's the Microsoft Office Word 97-2007 Binary File Format Specification [*.doc] document.
A: There's Microsoft Word's .doc and then there's plain text .doc. It sounds like you're wondering about the proprietary Microsoft format.
From Wikipedia:
The DOC format varies among Microsoft Office Word Formats. Word versions up to 97 used a different format from Microsoft Word version between 97 and 2003.
It wasn't until Word 2007 that .docx appeared, which, although a packaged file, is not necessarily a .zip archive; it is a structured XML document.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: What is Castle Windsor, and why should I care? I'm a long-time Windows developer, having cut my teeth on win32 and early COM. I've been working with .NET since 2001, so I'm pretty fluent in C# and the CLR. I'd never heard of Castle Windsor until I started participating in Stack Overflow. I've read the Castle Windsor "Getting Started" guide, but it's not clicking.
Teach this old dog new tricks, and tell me why I should be integrating Castle Windsor into my enterprise apps.
A: Mark Seemann wrote an excellent book on DI (Dependency Injection), which is a subset of IoC. He also compares a number of containers. I cannot recommend this book enough. The book's name is: "Dependency Injection in .Net" https://www.manning.com/books/dependency-injection-in-dot-net
A: Castle Windsor is an inversion of control tool. There are others like it.
It can give you objects with pre-built and pre-wired dependencies right in there. An entire object graph created via reflection and configuration rather than the "new" operator.
Start here: http://tech.groups.yahoo.com/group/altdotnet/message/10434
Imagine you have an email sending class. EmailSender. Imagine you have another class WorkflowStepper. Inside WorkflowStepper you need to use EmailSender.
You could always say new EmailSender().Send(emailMessage);
but that - the use of new - creates a TIGHT COUPLING that is hard to change. (this is a tiny contrived example after all)
So what if, instead of newing this bad boy up inside WorkflowStepper, you just passed it into the constructor?
So then whoever called it had to new up the EmailSender.
new WorkflowStepper(emailSender).Step()
Imagine you have hundreds of these little classes that only have one responsibility (google SRP).. and you use a few of them in WorkflowStepper:
new WorkflowStepper(emailSender, alertRegistry, databaseConnection).Step()
Imagine not worrying about the details of EmailSender when you are writing WorkflowStepper or AlertRegistry
You just worry about the concern you are working with.
Imagine this whole graph (tree) of objects and dependencies gets wired up at RUN TIME, so that when you do this:
WorkflowStepper stepper = Container.Get<WorkflowStepper>();
you get a real deal WorkflowStepper with all the dependencies automatically filled in where you need them.
There is no new
It just happens - because it knows what needs what.
And you can write fewer defects with better designed, DRY code in a testable and repeatable way.
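To make that concrete, a minimal Windsor wiring for the classes in this answer might look something like this (the classes are this answer's invented examples, not a real library; Register/Component.For/Resolve are the actual Windsor calls):
IWindsorContainer container = new WindsorContainer();
container.Register(
    Component.For<EmailSender>(),
    Component.For<AlertRegistry>(),
    Component.For<DatabaseConnection>(),
    Component.For<WorkflowStepper>());

WorkflowStepper stepper = container.Resolve<WorkflowStepper>();
stepper.Step();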
A: I think IoC is a stepping stone in the right direction on the path towards greater productivity and enjoyment for the development team (including PM, BA and BOs). It helps to establish a separation of concerns between developers and for testing. It gives peace of mind when architecting, which allows for flexibility as frameworks come in and out.
The best way to accomplish the goal that IoC (Castle Windsor or Ninject, etc.) takes a stab at is to #1 eliminate politics and #2 remove the need for developers to put on the facade of false understanding when developing. Do these two solutions not seem related to IoC? They are :)
A: Castle Windsor is a Dependency Injection container. It means that with its help you can inject your dependencies and use them without creating them with the new keyword.
For example, say you have written a repository or a service and you wish to use it in many places: you first register your service/repository, and after that you can start using it by injecting it in the required place.
You can take a look at the tutorial below, which I followed to learn Castle Windsor.
link.
Hope it will help you.
A: Put simply. Imagine you have some class buried in your code that needs a few simple config values to do its job. That means everything that creates an instance of that class needs to get those dependencies, so you usually end up having to refactor loads of classes along the way just to pass a bit of config down to where the instance gets created.
So either lots of classes are needlessly altered, or you bunch the config values into one big config class, which is also bad... or, worst of all, you go Service Locator!
IoC allows your class to get all its dependencies without that hassle, and manages the lifetimes of instances more explicitly too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "198"
}
|
Q: Is it possible to prevent stack allocation of an object and only allow it to be instantiated with 'new'? Is it possible to prevent stack allocation of an object and only allow it to be instantiated with 'new' on the heap?
A: One way you could do this would be to make the constructors private and only allow construction through a static method that returns a pointer. For example:
class Foo
{
public:
~Foo();
static Foo* createFoo()
{
return new Foo();
}
private:
Foo();
Foo(const Foo&);
Foo& operator=(const Foo&);
};
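As a usage sketch of the class above, stack allocation now fails at compile time while the factory still works:
// Foo f;                    // error: 'Foo::Foo()' is private
Foo* f = Foo::createFoo();   // OK: heap allocation through the factory
// ... use f ...
delete f;                    // the destructor is public, so callers can clean up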
A: The following allows public constructors and will stop stack allocations by throwing at runtime. Note thread_local is a C++11 keyword.
#include <new>        // std::bad_alloc, std::nothrow_t, placement new
#include <stdexcept>  // std::logic_error

class NoStackBase {
static thread_local bool _heap;
protected:
NoStackBase() {
bool _stack = _heap;
_heap = false;
if (_stack)
throw std::logic_error("heap allocations only");
}
public:
void* operator new(size_t size) throw (std::bad_alloc) {
_heap = true;
return ::operator new(size);
}
void* operator new(size_t size, const std::nothrow_t& nothrow_value) throw () {
_heap = true;
return ::operator new(size, nothrow_value);
}
void* operator new(size_t size, void* ptr) throw () {
_heap = true;
return ::operator new(size, ptr);
}
void* operator new[](size_t size) throw (std::bad_alloc) {
_heap = true;
return ::operator new[](size);
}
void* operator new[](size_t size, const std::nothrow_t& nothrow_value) throw () {
_heap = true;
return ::operator new[](size, nothrow_value);
}
void* operator new[](size_t size, void* ptr) throw () {
_heap = true;
return ::operator new[](size, ptr);
}
};
bool thread_local NoStackBase::_heap = false;
A: In the case of C++11
class Foo
{
public:
~Foo();
static Foo* createFoo()
{
return new Foo();
}
Foo(const Foo &) = delete; // if needed, put as private
Foo & operator=(const Foo &) = delete; // if needed, put as private
Foo(Foo &&) = delete; // if needed, put as private
Foo & operator=(Foo &&) = delete; // if needed, put as private
private:
Foo();
};
A: This should be possible in C++20 using a destroying operator delete, see p0722r3.
#include <new>
class C
{
private:
~C() = default;
public:
void operator delete(C *c, std::destroying_delete_t)
{
c->~C();
::operator delete(c);
}
};
Note that the private destructor prevents it from being used for anything else than dynamic storage duration. But the destroying operator delete allows it to be destroyed via a delete expression (as the delete expression does not implicitly call the destructor in this case).
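As a usage sketch of the class above:
C* c = new C;  // OK: dynamic storage duration
delete c;      // OK: the destroying operator delete runs ~C() itself
// C c2;       // error: ~C() is private, so automatic storage is rejected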
A: You could make the constructor private, then provide a public static factory method to create the objects.
A: You could create a header file that provides an abstract interface for the object, and factory functions that return pointers to objects created on the heap.
// Header file
class IAbstract
{
virtual void AbstractMethod() = 0;
public:
virtual ~IAbstract() {} // defined inline so the example links
};
IAbstract* CreateSubClassA();
IAbstract* CreateSubClassB();
// Source file
class SubClassA : public IAbstract
{
void AbstractMethod() {}
};
class SubClassB : public IAbstract
{
void AbstractMethod() {}
};
IAbstract* CreateSubClassA()
{
return new SubClassA;
}
IAbstract* CreateSubClassB()
{
return new SubClassB;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
}
|
Q: Access Intranet via SSL using WebBrowser Winforms Control I have a .Net 2.0 app that is used internally and we want to use the WebBrowser control to access some Web resources. We want to add encryption to these sites using SSL using self signed certificates.
My question is if there is way to disable all the warnings about the SSL keys coming from an untrusted source? I would like to avoid to have to install the keys in each of the workstations running the app.
Any other suggestions on how to do this are welcome.
A: I do not believe there is a workaround for this; you will always get the warning when accessing the above-mentioned web resources via the WebBrowser control (or Internet Explorer, for that matter). You could, however, distribute the root cert via Group Policy.
A: You can do this by hooking dialogs (as someone above sent the link), but then implementing SSL will be pointless, because when an attacker performs a MITM attack you'll ignore the warnings and continue anyway. A better option: your installer could install the certificate in the first place.
A: I have a workaround for this, elventear. It is part of a Code Project article that I am writing that illustrates various 'tricks' you can perform when dealing with security and the WebBrowser control. I will advise here when it is available.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to get the application executable name in Windows C++/CLI? I need to change the functionality of an application based on the executable name. Nothing huge, just changing strings that are displayed and some internal identifiers. The application is written in a mixture of native and .NET C++/CLI code.
Two ways that I have looked at are to parse the GetCommandLine() function in Win32 and stuffing around with the AppDomain and other things in .Net. However using GetCommandLine won't always work as when run from the debugger the command line is empty. And the .Net AppDomain stuff seems to require a lot of stuffing around.
So what is the nicest/simplest/most efficient way of determining the executable name in C++/CLI? (I'm kind of hoping that I've just missed something simple that is available in .Net.)
Edit: One thing that I should mention is that this is a Windows GUI application using C++/CLI, therefore there's no access to the traditional C style main function, it uses the Windows WinMain() function.
A: Use the argv argument to main:
int main(int argc, char* argv[])
{
printf("%s\n", argv[0]); //argv[0] will contain the name of the app.
return 0;
}
You may need to scan the string to remove directory information and/or extensions, but the name will be there.
A: Call GetModuleFileName() using 0 as a module handle.
Note: you can also use the argv[0] parameter to main or call GetCommandLine() if there is no main. However, keep in mind that these methods will not necessarily give you the complete path to the executable file. They will give back the same string of characters that was used to start the program. Calling GetModuleFileName(), instead, will always give you a complete path and file name.
A: Use __argv[0]
A: There is a static Method on Assembly that will get it for you in .NET.
Assembly.GetEntryAssembly().FullName
Edit: I didn't realize that you wanted the file name... you can also get that by calling:
Assembly.GetEntryAssembly().CodeBase
That will get you the full path to the assembly (including the file name).
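If you only want the bare file name rather than the full path, one way (a sketch, not part of the original answer) is to combine this with Path.GetFileName:
using System.IO;
using System.Reflection;

string exeName = Path.GetFileName(Assembly.GetEntryAssembly().Location);
// e.g. "MyApp.exe"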
A: Ferruccio's answer is good. Here's some example code:
TCHAR exepath[MAX_PATH+1];
if (0 == GetModuleFileName(0, exepath, MAX_PATH+1))
    MessageBox(_T("Error!"));
else
    MessageBox(exepath, _T("My executable name"));
A: I can confirm it works under win64/visual studio 2017/ MFC
TCHAR szFileName[MAX_PATH + 1];
GetModuleFileName(NULL, szFileName, MAX_PATH + 1);
auto exe = CString(szFileName);
exe contains full path to exe.
A: According to MSDN:
The global variable _pgmptr is automatically initialized to the full
path of the executable file, and can be used to retrieve the full path
name of an executable file.
A: I guess I like to make it a habit to add answers to really old questions. Let's do it again!
The very simplest way to get the current running .exe on a Windows system is to use the SDK defined global variable _pgmptr. However, it's considered "unsafe", so you use _get_pgmptr() instead.
So...
char *exe;
_get_pgmptr(&exe);
std::cout << "This executable is [" << exe << "]." << std::endl;
If you have a "wide" (UTF16) entrypoint, such as wmain or wWinMain, use this instead:
wchar_t* wexe;
_get_wpgmptr(&wexe);
std::wcout << L"This executable is [" << wexe << L"]." << std::endl;
If you really, really insist on using the simpler version, here's how to do it:
#define _CRT_SECURE_NO_WARNINGS 1
#include <iostream>
int main(int argc, char** argv)
{
std::cout << "This executable is [" << _pgmptr << "]." << std::endl;
}
Just don't get yourself pwned by hax0rs!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: XMLDocument.Load(url) through a proxy I have a bit of code that basically reads an XML document using the XMLDocument.Load(uri) method which works fine, but doesn't work so well if the call is made through a proxy.
I was wondering if anyone knew of a way to make this call (or achieve the same effect) through a proxy?
A: You can't configure XmlDocument to use a proxy. You can use the WebRequest or WebClient class to load the data via the proxy and pass the obtained response stream to XmlDocument.
Also, you can try to use the XmlTextReader class. It allows you to set network credentials. For details see:
Supplying Authentication Credentials to XmlResolver when Reading from a File
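A minimal sketch of the WebRequest approach (the proxy address is a placeholder):
WebRequest request = WebRequest.Create(url);
request.Proxy = new WebProxy("http://proxyserver:8080"); // placeholder address
using (WebResponse response = request.GetResponse())
using (Stream stream = response.GetResponseStream())
{
    XmlDocument doc = new XmlDocument();
    doc.Load(stream); // XmlDocument.Load accepts a Stream
}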
A: You need to use WebProxy and WebRequest to download the xml, then parse it.
A: This is the code that I ended up using:
WebProxy wp = new WebProxy(Settings.Default.ProxyAddress);
wp.Credentials = new NetworkCredential(Settings.Default.ProxyUsername, Settings.Default.ProxyPassword);
WebClient wc = new WebClient();
wc.Proxy = wp;
MemoryStream ms = new MemoryStream(wc.DownloadData(url));
XmlTextReader rdr = new XmlTextReader(ms);
return XDocument.Load(rdr);
A: Use lomaxx's answer but change
MemoryStream ms = new MemoryStream(wc.DownloadData(url));
XmlTextReader rdr = new XmlTextReader(url);
to
MemoryStream ms = new MemoryStream(wc.DownloadData(url));
XmlTextReader rdr = new XmlTextReader(ms);
A: Do you have to provide credentials to the proxy?
If so, this should help:
"Supplying Authentication Credentials to XmlResolver when Reading from a File"
http://msdn.microsoft.com/en-us/library/aa720674.aspx
Basically, you...
*
*Create an XmlTextReader using the URL
*Set the Credentials property of the reader's XmlResolver
*Create an XmlDocument instance and pass the reader to the Load method.
A: Actually, rather than storing the proxy in your app's settings, you can use the Windows proxy configuration. I believe this is standard now, so you don't have to configure the proxy in each program (e.g. Chrome's proxy settings just direct you to the Windows settings).
IWebProxy wp = WebRequest.GetSystemWebProxy();
wp.Credentials = WebRequest.GetSystemWebProxy().Credentials;
WebClient wc = new WebClient();
wc.Proxy = wp;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How do I make an Ajax.Autocompleter perform a request without typing? I'm using scriptaculous's Ajax.Autocompleter for a search with different filters.
http://github.com/madrobby/scriptaculous/wikis/ajax-autocompleter
The filters are requiring me to pass data into the autocompleter dynamically, which I've successfully learned to do from the following link.
http://www.simpltry.com/2007/01/30/ajaxautocompleter-dynamic-parameters/
Now, I have multiple filters and one search box. How do I get the autocompleter to make the request without typing into the input, but by clicking a new filter?
Here's a use case to clarify. The page loads, there are multiple filters (just links with onclicks), and one input field with the autocompleter attached. I type a query and the autocompleter request is performed. Then, I click on a different filter, and I'd like another request to be performed with the same query, but different filter.
Or more succinctly, how do I make the autocompleter perform the request when I want, instead of depending on typing to trigger it?
A: I also found that the activate() method worked great. Here is my sample code....
<script type="text/javascript">
/*<![CDATA[*/
var autocomp1 = new Ajax.Autocompleter("search", "AjaxResultsListPlaceholder", "ajaxServerSideSearchHandler.php", {
frequency: 1,
minChars: 10,
indicator: "AjaxWorkingPleaseWaitPlaceholder" // trailing comma removed - old IE chokes on it
} );
/*]]>*/
</script>
<form id="theform">
<input type="text" id="search" name="search" value="" />
<input type="button" id="btn_search" name="btn_search" value="Search" onclick="autocomp1.activate();" />
<div id="AjaxWorkingPleaseWaitPlaceholder" style="display: none; border: 1px solid #ffaaaa;">
</div>
<div id="AjaxResultsListPlaceholder" style="display: none;; border: 1px solid #aaffaa;">
</div>
</form>
A: To answer my own question: fake a key press. It ensures that the request is made, and that the dropdown box becomes visible. Here's my function to fake the key press, which takes into account the differences in IE and Firefox.
function fakeKeyPress(input_id) {
var input = $(input_id);
if(input.fireEvent) {
// ie stuff
var evt = document.createEventObject();
evt.keyCode = 67;
$(input_id).fireEvent("onKeyDown", evt);
} else {
// firefox stuff
var evt = document.createEvent("KeyboardEvent");
evt.initKeyEvent('keydown', true, true, null, false, false, false, false, 27, 0);
var canceled = !$(input_id).dispatchEvent(evt);
}
}
A: Having looked at the Scriptaculous source to see what happens on keypress, I would suggest you try calling onObserverEvent().
var autoCompleter = new Ajax.Autocompleter(/* exercise for the reader */);
// Magic happens
autoCompleter.onObserverEvent();
A: var autoCompleter = new Ajax.Autocompleter(/* exercise for the reader */);
// Magic happens
autoCompleter.activate();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: C# file read/write fileshare doesn't appear to work My question is based off of inheriting a great deal of legacy code that I can't do very much about. Basically, I have a device that will produce a block of data. A library, which calls the device to create that block of data, writes that block of data to disk - for some reason I don't entirely understand and cannot change even if I wanted to.
This write is not instantaneous, but can take up to 90 seconds. In that time, the user wants to get a partial view of the data that's being produced, so I want to have a consumer thread which reads the data that the other library is writing to disk.
Before I even touch this legacy code, I want to mimic the problem using code I entirely control. I'm using C#, ostensibly because it provides a lot of the functionality I want.
In the producer class, I have this code creating a random block of data:
FileStream theFS = new FileStream(this.ScannerRawFileName,
FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read);
//note that I need to be able to read this elsewhere...
BinaryWriter theBinaryWriter = new BinaryWriter(theFS);
int y, x;
for (y = 0; y < imheight; y++){
ushort[] theData= new ushort[imwidth];
for(x = 0; x < imwidth;x++){
theData[x] = (ushort)(2*y+4*x);
}
byte[] theNewArray = new byte[imwidth * 2];
Buffer.BlockCopy(theData, 0, theNewArray, 0, imwidth * 2); // copy theData (the original referenced an undeclared 'theImage')
theBinaryWriter.Write(theNewArray);
Thread.Sleep(mScanThreadWait); //sleep for 50 milliseconds
Progress = (float)(y-1 >= 0 ? y-1 : 0) / (float)imheight;
}
theFS.Close();
So far, so good. This code works. The current version (using FileStream and BinaryWriter) appears to be equivalent (though slower, because of the copy) to using File.Open with the same options and a BinaryFormatter on the ushort[] being written to disk.
But then I add a consumer thread:
FileStream theFS;
if (!File.Exists(theFileName)) {
//do error handling
return;
}
else {
theFS = new FileStream(theFileName, FileMode.Open,
FileAccess.Read, FileShare.Read);
//very relaxed file opening
}
BinaryReader theReader = new BinaryReader(theFS);
//gotta do this copying in order to handle byte array swaps
//frustrating, but true.
byte[] theNewArray = theReader.ReadBytes(
(int)(imheight * imwidth * inBase.Progress) * 2);
ushort[] theData = new ushort[((int)(theNewArray.Length/2))];
Buffer.BlockCopy(theNewArray, 0, theData, 0, theNewArray.Length);
Now, it's possible that the declaration of theNewArray is broken, and will cause some kind of read overflow. However, this code never gets that far, because it always always always breaks on trying to open the new FileStream with a System.IO.IOException that states that another process has opened the file.
I'm setting the FileAccess and FileShare enumerations as stated in the FileStream documentation on MSDN, but it appears that I just can't do what I want to do (ie, write in one thread, read in another). I realize that this application is a bit unorthodox, but when I get the actual device involved, I'm going to have to do the same thing, but using MFC.
In any event, what am I forgetting? Is what I want to do possible, since it's specified as possible in the documentation?
Thanks!
mmr
A: Your consumer must specify FileShare.ReadWrite.
By trying to open the file as FileShare.Read in the consumer you are saying "I want to open the file and let others read it at the same time" ... since there is already a writer that call fails, you have to allow concurrent writes with the reader.
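In other words, only the consumer's share flag changes:
theFS = new FileStream(theFileName, FileMode.Open,
                       FileAccess.Read, FileShare.ReadWrite);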
A: I haven't had time to test this but I think you may need to call the Flush method of the BinaryWriter
FileStream theFS = new FileStream(this.ScannerRawFileName,
FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read);
//note that I need to be able to read this elsewhere...
BinaryWriter theBinaryWriter = new BinaryWriter(theFS);
int y, x;
for (y = 0; y < imheight; y++){
ushort[] theData= new ushort[imwidth];
for(x = 0; x < imwidth;x++){
theData[x] = (ushort)(2*y+4*x);
}
byte[] theNewArray = new byte[imwidth * 2];
Buffer.BlockCopy(theData, 0, theNewArray, 0, imwidth * 2);
theBinaryWriter.Write(theNewArray);
Thread.Sleep(mScanThreadWait); //sleep for 50 milliseconds
Progress = (float)(y-1 >= 0 ? y-1 : 0) / (float)imheight;
theBinaryWriter.Flush();
}
theFS.Close();
Sorry I haven't had time to test this. I ran into an issue with a file I was creating that was similar to this (although not exact) and a missing "Flush" was the culprit.
A: I believe Chuck is right, but keep in mind that the only reason this works at all is because the file system is smart enough to serialize your reads/writes; you have no locking on the file resource - and that's not a good thing :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: dTrace scripts and tools I've recently begun using dTrace and have noticed just how awesome it is. It's the perfect tool for profiling without placing the burden on programmers to set up hundreds of probes in their applications.
I've found some nice one-liners and sample scripts here and there, but I was wondering what scripts, tools and links others might want to share.
BTW Anybody tried Chimes?
A: Here are some links I've found useful
A Powerpoint presentation about dTrace:
http://www.nbl.fi/~nbl97/solaris/dtrace/dtt_present.pdf
200+ useful scripts:
http://www.brendangregg.com/
A: I attended Theo Schlossnagle's Full Stack Introspection Crash Course talk at OSCON this year. In that presentation he gives several examples of using the D-Trace language and at the above link there are some additional utilities.
A: It's worth noting that because of the differences in Apple's and Sun's implementations, dtrace scripts from Solaris may not (likely won't) work on Leopard, and vice-versa. I'm not sure about FreeBSD's version.
The main problem is a different set of probes made available by the OS. Sometimes the probes will be provided under a different name. Sometimes they'll be more or less specific from one OS to another. Just a gotcha in case you come across a script that, for some reason, won't work.
A: Unfortunately, dTrace is only implemented in/for Solaris. People from Sun recommended that I port all my PHP applications to Solaris and "dtrace" them, then port them back to my previous OS after optimizing.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: glob() - sort array of files by last modified datetime stamp I'm trying to display an array of files in order of date (last modified).
I have done this by looping through the array and sorting it into another array, but is there an easier (more efficient) way to do this?
A: Since PHP 7.4, the best solution is to use a custom sort with an arrow function:
usort($myarray, fn($a, $b) => filemtime($a) - filemtime($b));
You can also use the spaceship operator which works for all kinds of comparisons and not just on integer ones. It won't make any difference in this case, but it's a good practice to use it in all sorting operations.
usort($myarray, fn($a, $b) => filemtime($a) <=> filemtime($b));
If you want to sort in reverse order, you can negate the comparison:
usort($myarray, fn($a, $b) => -(filemtime($a) - filemtime($b)));
// or
usort($myarray, fn($a, $b) => -(filemtime($a) <=> filemtime($b)));
Note that calling filemtime() repeatedly is bad for performance. Apply memoization to improve it.
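For example, a decorate-then-sort sketch that calls filemtime() exactly once per file:
$mtimes = [];
foreach ($myarray as $file) {
    $mtimes[$file] = filemtime($file); // one stat per file
}
usort($myarray, fn($a, $b) => $mtimes[$a] <=> $mtimes[$b]);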
A: This can be done with better performance. The usort() in the accepted answer will call filemtime() a lot of times. PHP uses the quicksort algorithm, which has an average performance of 1.39*n*lg(n) comparisons. The algorithm calls filemtime() twice per comparison, so we will have about 28 calls for 10 directory entries, 556 calls for 100 entries, 8340 calls for 1000 entries, etc. The following piece of code works well for me and has great performance:
exec ( stripos ( PHP_OS, 'WIN' ) === 0 ? 'dir /B /O-D *.*' : 'ls -td1 *.*' , $myarray );
A: Year 2020 - If you care about performance, consider not using glob()!
If you want to scan a lot of files in a folder without special wildcards,
rulesets, or any exec() call,
I suggest scandir() or readdir().
glob() is a lot slower, and on Windows it's slower still.
Quote by aalfiann:
Why does glob seem slower in this benchmark? Because glob will
recurse into sub dirs if you write it like "mydir/*".
Just make sure there are no sub dirs to make glob fast.
"mydir/*.jpg" is faster because glob will not try to get files inside
a sub dir.
benchmark: glob() vs scandir()
http://www.spudsdesign.com/benchmark/index.php?t=dir2 (external)
discussion: readdir() vs scandir()
readdir vs scandir (stackoverflow)
Use readdir() or scandir() combined with the usort() snippets below for pretty neat performance.
PHP 5.3.0 and newer (anonymous function)
usort( $myarray, function( $a, $b ) { return filemtime($a) - filemtime($b); } );
source: https://stackoverflow.com/a/60476123/3626361
PHP 7.4 and newer (arrow function)
usort($myarray, fn($a, $b) => filemtime($a) - filemtime($b));
source: https://stackoverflow.com/a/35925596/3626361
if you wanna go even deeper the rabbit hole:
The DirectoryIterator
https://www.php.net/manual/en/class.directoryiterator.php
https://www.php.net/manual/en/directoryiterator.construct.php (read the comments!)
http://paulyg.github.io/blog/2014/06/03/directoryiterator-vs-filesystemiterator.html
Difference between DirectoryIterator and FileSystemIterator
Last but not least, my Demo!
<?php
function files_attachment_list($id, $sort_by_date = false, $allowed_extensions = ['png', 'jpg', 'jpeg', 'gif', 'doc', 'docx', 'pdf', 'zip', 'rar', '7z'])
{
if (empty($id) or !is_dir(sprintf('files/%s/', $id))) {
return false;
}
$out = [];
foreach (new DirectoryIterator(sprintf('files/%s/', $id)) as $file) {
if ($file->isFile() == false || !in_array($file->getExtension(), $allowed_extensions)) {
continue;
}
$datetime = new DateTime();
$datetime->setTimestamp($file->getMTime());
$out[] = [
'title' => $file->getFilename(),
'size' => human_filesize($file->getSize()),
'modified' => $datetime->format('Y-m-d H:i:s'),
'extension' => $file->getExtension(),
'url' => $file->getPathname()
];
}
$sort_by_date && usort($out, function ($a, $b) {
return strcmp($b['modified'], $a['modified']); // newest first, with a proper int comparator
});
return $out;
}
function human_filesize($bytes, $decimals = 2)
{
$sz = 'BKMGTP';
$factor = floor((strlen($bytes) - 1) / 3);
return sprintf("%.{$decimals}f", $bytes / pow(1024, $factor)) . @$sz[$factor];
}
// returns a file info array from path like '/files/123/*.extensions'
// extensions = 'png', 'jpg', 'jpeg', 'gif', 'doc', 'docx', 'pdf', 'zip', 'rar', '7z'
// OS specific sorting
print_r( files_attachment_list(123) );
// returns a file info array from the folder '/files/456/*.extensions'
// extensions = 'txt', 'zip'
// sorting by modified date (newest first)
print_r( files_attachment_list(456, true, ['txt','zip']) );
A: <?php
$items = glob('*', GLOB_NOSORT);
array_multisort(array_map('filemtime', $items), SORT_NUMERIC, SORT_DESC, $items);
A: This solution is the same as the accepted answer, updated with an anonymous function1:
$myarray = glob("*.*");
usort( $myarray, function( $a, $b ) { return filemtime($a) - filemtime($b); } );
1 Anonymous functions have been introduced in PHP in 2010. Original answer is dated 2008.
A:
Warning create_function() has been DEPRECATED as of PHP 7.2.0. Relying on this function is highly discouraged.
For the sake of posterity, in case the forum post linked in the accepted answer is lost or unclear to some, the relevant code needed is:
<?php
$myarray = glob("*.*");
usort($myarray, create_function('$a,$b', 'return filemtime($a) - filemtime($b);'));
?>
Tested this on my system and verified it does sort by file mtime as desired. I used a similar approach (written in Python) for determining the last updated files on my website as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
}
|
Q: Create Word Document using PHP in Linux What are the available solutions for PHP to create Word documents in a Linux environment?
A: OpenTBS can create DOCX dynamic documents in PHP using the technique of templates.
No temporary files needed, no command lines, all in PHP.
It can add or delete pictures. The created document can be produced as a HTML download, a file saved on the server, or as binary contents in PHP.
It can also merge OpenDocument files (ODT, ODS, ODF, ...)
http://www.tinybutstrong.com/opentbs.php
A: Following on Ivan Krechetov's answer, here is a function that does mail merge (actually just simple text replace) for docx and odt, without the need for an extra library.
function mailMerge($templateFile, $newFile, $row)
{
if (!copy($templateFile, $newFile)) // make a duplicate so we don't overwrite the template
return false; // could not duplicate template
$zip = new ZipArchive();
if ($zip->open($newFile, ZIPARCHIVE::CHECKCONS) !== TRUE)
return false; // probably not a docx file
$file = substr($templateFile, -4) == '.odt' ? 'content.xml' : 'word/document.xml';
$data = $zip->getFromName($file);
foreach ($row as $key => $value)
$data = str_replace($key, $value, $data);
$zip->deleteName($file);
$zip->addFromString($file, $data);
$zip->close();
return true;
}
This will replace
[Person Name] with Mina
and [Person Last Name] with Mooo:
$replacements = array('[Person Name]' => 'Mina', '[Person Last Name]' => 'Mooo');
$newFile = tempnam_sfx(sys_get_temp_dir(), '.dat');
$templateName = 'personinfo.docx';
if (mailMerge($templateName, $newFile, $replacements))
{
header('Content-type: application/msword');
header('Content-Disposition: attachment; filename=' . $templateName);
header('Accept-Ranges: bytes');
header('Content-Length: ' . filesize($newFile)); // $newFile - the original referenced an undefined $file
readfile($newFile);
unlink($newFile);
}
Beware that this function can corrupt the document if the string to replace is too general. Try to use verbose replacement strings like [Person Name].
A: real Word documents
If you need to produce "real" Word documents you need a Windows-based web server and COM automation. I highly recommend Joel's article on this subject.
fake HTTP headers for tricking Word into opening raw HTML
A rather common (but unreliable) alternative is:
header("Content-type: application/vnd.ms-word");
header("Content-Disposition: attachment; filename=document_name.doc");
echo "<html>";
echo "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=Windows-1252\">";
echo "<body>";
echo "<b>Fake word document</b>";
echo "</body>";
echo "</html>"
Make sure you don't use external stylesheets. Everything should be in the same file.
Note that this does not send an actual Word document. It merely tricks browsers into offering it as a download and defaulting to a .doc file extension. Older versions of Word may often open this without any warning/security message and just import the raw HTML into Word. PHP sending that misleading Content-Type header along does not constitute a real file format conversion.
A: PHPWord can generate Word documents in docx format. It can also use an existing .docx file as a template - template variables can be added to the document in the format ${varname}
It has an LGPL license and the examples that came with the code worked nicely for me.
A: The Apache project has a library called POI which can be used to generate MS Office files. It is a Java library but the advantage is that it can run on Linux with no trouble. This library has its limitations but it may do the job for you, and it's probably simpler to use than trying to run Word.
Another option would be OpenOffice but I can't exactly recommend it since I've never used it.
A: <?php
function fWriteFile($sFileName,$sFileContent="No Data",$ROOT)
{
$word = new COM("word.application") or die("Unable to instantiate Word");
//bring it to front
$word->Visible = 1;
//open an empty document
$word->Documents->Add();
//do some weird stuff
$word->Selection->TypeText($sFileContent);
$word->Documents[1]->SaveAs($ROOT."/".$sFileName.".doc");
//closing word
$word->Quit();
//free the object
$word = null;
return $sFileName;
}
?>
<?php
$PATH_ROOT=dirname(__FILE__);
$Return ="<table>";
$Return .="<tr><td>Row[0]</td></tr>";
$Return .="<tr><td>Row[1]</td></tr>";
$Return .="</table>"; // $Return - the original appended to a mistyped $sReturn
fWriteFile("test",$Return,$PATH_ROOT);
?>
A: OpenOffice templates + OOo command line interface.
*
*Create manually an ODT template with placeholders, like [%value-to-replace%]
*When instantiating the template with real data in PHP, unzip the template ODT (it's zipped XML) and run a textual replace of the placeholders with the actual values against the XML.
*Zip the ODT back
*Run the conversion ODT -> DOC via the OpenOffice command-line interface (see the sketch after this list).
There are tools and libraries available to ease each of those steps.
Maybe that helps.
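For the final conversion step, a sketch of invoking it from PHP - this assumes an OpenOffice/LibreOffice build whose soffice binary supports headless conversion and is on the PATH:
// assumption: this soffice build supports --headless --convert-to
exec('soffice --headless --convert-to doc ' . escapeshellarg('letter.odt'));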
A: By far the easiest way to create DOC files on Linux, using PHP is with the Zend Framework component phpLiveDocx.
From the project web site:
"phpLiveDocx allows developers to generate documents by combining structured data from PHP with a template, created in a word processor. The resulting document can be saved as a PDF, DOCX, DOC or RTF file. The concept is the same as with mail-merge."
A: There are two options for creating quality Word documents: use COM to communicate with Word (this requires a Windows PHP server, at least), or use OpenOffice and its API to create and save documents in Word format.
A: Take a look at the PHP COM documentation (the comments are helpful): http://us3.php.net/com
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
}
|
Q: Existence map in C++ I want something like an std::map, but I only want to see if the item exists or not, I don't actually need a key AND a value. What should I use?
A: If you want the same type of behavior as std::map, then you want std::set.
If you are mixing insert/delete and query operations, then std::set is probably the best choice. However, if you can populate the set first and then follow it with the queries, it might be worth looking at using std::vector, sorting it, and then using a binary search to check for existence in the vector.
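A sketch of that populate-first variant:
#include <algorithm>
#include <vector>

std::vector<int> items;
items.push_back(7);                               // populate first...
items.push_back(3);
items.push_back(9);
std::sort(items.begin(), items.end());            // ...sort once...
bool exists = std::binary_search(items.begin(),
                                 items.end(), 9); // ...then query cheaply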
A: If you really need existence only, and not even an order, you need an unordered_set. It is available from your favorite C++0x vendor or boost.org.
A: Looks like you need a std::set.
A: If your data is numerical you can use an std::vector which is optimized for space:
D:\Temp>type vectorbool.cpp
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<bool> vb(10);
vb[5] = true;
for (vector<bool>::const_iterator ci = vb.begin(); ci != vb.end(); ++ci) {
cout << *ci << endl;
}
}
D:\Temp>cl /nologo /W4 /EHsc vectorbool.cpp
vectorbool.cpp
D:\Temp>vectorbool.exe
0
0
0
0
0
1
0
0
0
0
A: You should probably look at std::set for what you need. A std::bitset is another option.
How you need to use the information will determine which of these is better. A set is a sorted data structure; insertion, find and deletion take O(log n) time. But if you need to iterate over all the values that you have marked for "existence", then the set is the way to go.
If you only need to mark and look up the fact that something is a member of a set, then the bitset might be better for you. Insertion, find and delete take only O(1), but you can only track integer values. Iterating over all the marked values will take O(n), as you need to go through the whole bitset to find the members that are set to true. You can use it in concert with a std::map to map from the values you have to the numerical indices the bitset needs.
Look at the operations that you need to perform with the values in your set and you should be able to choose the appropriate data structure
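A small sketch of the bitset approach for integer values:
#include <bitset>

std::bitset<1000> seen;       // existence flags for values 0..999
seen.set(42);                 // mark: O(1)
bool exists = seen.test(42);  // lookup: O(1)
seen.reset(42);               // unmark: O(1)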
A: You can keep using std::map for the desired purpose.
To check if a particular item (of key type) exists in the map or not, you can use following code:
if (mapObj.count(item) != 0)
{
// item exists
}
As answered earlier, std::set will do the job as well. Interestingly, both set and map are typically implemented as trees internally.
A: If the key IS the value, then you might also consider a "bloom filter" rather than a set.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Getting logged on user's name with or without domain in Windows Is there a direct API to get the currently logged in user's name with the domain? So, it would return something like "domain\user" when a machine is on the domain, but at the same time it would return "user" when the machine is not on the domain? If there's not, what's the best way to get this information?
I noticed there's a LookupAccountName function - would that be the right direction to take?
A: Try GetUserNameEx(). It supports various name formats.
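A sketch of calling it for the "domain\user" style name (NameSamCompatible); note that SECURITY_WIN32 must be defined before including security.h, and the function lives in secur32.lib:
#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

TCHAR name[256];
ULONG size = sizeof(name) / sizeof(name[0]);
if (GetUserNameEx(NameSamCompatible, name, &size)) {
    // name now holds e.g. "DOMAIN\user" (machine-qualified when off-domain)
}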
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Lower than low level common bsd sockets How do you do really low-level sockets in C, below the common BSD socket API - for example, actually sending a SYN?
A: Raw sockets are your friend.
There have been some links to useful information on this question.
Also consult Chapter 25 "Raw sockets" of Steven's "Unix Network Programming"
If you're attempting cross platform code you may find libpcap a useful alternative.
A: You want to use raw sockets. In *nix, you need to be root to be able to create raw sockets. I'm not sure if it's possible in Windows.
A: What you actually want is a raw socket ... you can completely control the headers and flags with the raw socket interface, but programming them is much more challenging. Here's a great tutorial to get you started: http://www.cs.binghamton.edu/~steflik/cs455/rawip.txt.
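As a minimal sketch, creating the raw socket itself looks like this (root required on *nix; actually sending a SYN then means filling in the IP and TCP headers yourself, as the tutorial shows):
#include <sys/socket.h>
#include <netinet/in.h>

/* must run as root */
int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

/* tell the kernel we will supply our own IP header */
int on = 1;
setsockopt(s, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));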
A: I suspect the nmap sources would be an excellent place to look.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Windows Forms textbox that has line numbers? I'm looking for a free winforms component for an application I'm writing. I basicly need a textbox that contains line numbers in a side column. Being able to tabulate data within it would be a major plus too.
Does anyone know of a premade component that could do this?
A: Referencing Wayne's post, here is the relevant code. It uses GDI+ to draw line numbers next to the text box.
Public Sub New()
MyBase.New()
'This call is required by the Windows Form Designer.
InitializeComponent()
'Add any initialization after the InitializeComponent() call
SetStyle(ControlStyles.UserPaint, True)
SetStyle(ControlStyles.AllPaintingInWmPaint, True)
SetStyle(ControlStyles.DoubleBuffer, True)
SetStyle(ControlStyles.ResizeRedraw, True)
End Sub
Private Sub RichTextBox1_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles RichTextBox1.SelectionChanged
FindLine()
Invalidate()
End Sub
Private Sub FindLine()
Dim intChar As Integer
intChar = RichTextBox1.GetCharIndexFromPosition(New Point(0, 0))
intLine = RichTextBox1.GetLineFromCharIndex(intChar)
End Sub
Private Sub DrawLines(ByVal g As Graphics, ByVal intLine As Integer)
Dim intCounter As Integer, intY As Integer
g.Clear(Color.Black)
intCounter = intLine + 1
intY = 2
Do
g.DrawString(intCounter.ToString(), Font, Brushes.White, 3, intY)
intCounter += 1
intY += Font.Height + 1
If intY > ClientRectangle.Height - 15 Then Exit Do
Loop
End Sub
Protected Overrides Sub OnPaint(ByVal e As System.Windows.Forms.PaintEventArgs)
DrawLines(e.Graphics, intLine)
End Sub
Private Sub RichTextBox1_VScroll(ByVal sender As Object, ByVal e As System.EventArgs) Handles RichTextBox1.VScroll
FindLine()
Invalidate()
End Sub
Private Sub RichTextBox1_UserScroll() Handles RichTextBox1.UserScroll
FindLine()
Invalidate()
End Sub
The RichTextBox is overridden like this:
Public Class UserControl1
Inherits System.Windows.Forms.RichTextBox
Public Event UserScroll()
Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
If m.Msg = &H115 Then
RaiseEvent UserScroll()
End If
MyBase.WndProc(m)
End Sub
End Class
(Code by divil on the xtremedotnettalk.com forum.)
A: Take a look at the SharpDevelop C# compiler/IDE source code. They have a sophisticated text box with line numbers. You could look at the source, figure out what they're doing, and then implement it yourself.
Here's a sample of what I'm referencing:
(source: sharpdevelop.net)
A: There is a project with code available at http://www.xtremedotnettalk.com/showthread.php?s=&threadid=49661&highlight=RichTextBox.
You can log into the site to download the zip file with the user/pass: bugmenot/bugmenot
A: There is a source code editing component in CodePlex for .net,
http://www.codeplex.com/ScintillaNET
A: Here's some C# code to do this. It's based on the code from xtremedotnettalk.com referenced by Wayne. I've made some changes to make it actually display the editor text, which the original didn't do. But, to be fair, the original code author did mention it needed work.
Here's the code (NumberedTextBox.cs)
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Text;
using System.Windows.Forms;
namespace NumberedTextBoxLib {
public partial class NumberedTextBox : UserControl {
private int lineIndex = 0;
new public String Text {
get {
return editBox.Text;
}
set {
editBox.Text = value;
}
}
public NumberedTextBox() {
InitializeComponent();
SetStyle(ControlStyles.UserPaint, true);
SetStyle(ControlStyles.AllPaintingInWmPaint, true);
SetStyle(ControlStyles.OptimizedDoubleBuffer, true);
SetStyle(ControlStyles.ResizeRedraw, true);
editBox.SelectionChanged += new EventHandler(selectionChanged);
editBox.VScroll += new EventHandler(OnVScroll);
}
private void selectionChanged(object sender, EventArgs args) {
FindLine();
Invalidate();
}
private void FindLine() {
int charIndex = editBox.GetCharIndexFromPosition(new Point(0, 0));
lineIndex = editBox.GetLineFromCharIndex(charIndex);
}
private void DrawLines(Graphics g) {
int counter, y;
g.Clear(BackColor);
counter = lineIndex + 1;
y = 2;
int max = 0;
while (y < ClientRectangle.Height - 15) {
SizeF size = g.MeasureString(counter.ToString(), Font);
g.DrawString(counter.ToString(), Font, new SolidBrush(ForeColor), new Point(3, y));
counter++;
y += (int)size.Height;
if (max < size.Width) {
max = (int) size.Width;
}
}
max += 6;
editBox.Location = new Point(max, 0);
editBox.Size = new Size(ClientRectangle.Width - max, ClientRectangle.Height);
}
protected override void OnPaint(PaintEventArgs e) {
DrawLines(e.Graphics);
e.Graphics.TranslateTransform(50, 0);
editBox.Invalidate();
base.OnPaint(e);
}
///Redraw the numbers when the editor is scrolled vertically
private void OnVScroll(object sender, EventArgs e) {
FindLine();
Invalidate();
}
}
}
And here is the Visual Studio designer code (NumberedTextBox.Designer.cs)
namespace NumberedTextBoxLib {
partial class NumberedTextBox {
/// Required designer variable.
private System.ComponentModel.IContainer components = null;
/// Clean up any resources being used.
protected override void Dispose(bool disposing) {
if (disposing && (components != null)) {
components.Dispose();
}
base.Dispose(disposing);
}
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
private void InitializeComponent() {
this.editBox = new System.Windows.Forms.RichTextBox();
this.SuspendLayout();
//
// editBox
//
this.editBox.AcceptsTab = true;
this.editBox.Anchor = ((System.Windows.Forms.AnchorStyles) ((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom)
| System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.editBox.Location = new System.Drawing.Point(27, 3);
this.editBox.Name = "editBox";
this.editBox.Size = new System.Drawing.Size(122, 117);
this.editBox.TabIndex = 0;
this.editBox.Text = "";
this.editBox.WordWrap = false;
//
// NumberedTextBox
//
this.Controls.Add(this.editBox);
this.Name = "NumberedTextBox";
this.Size = new System.Drawing.Size(152, 123);
this.ResumeLayout(false);
}
private System.Windows.Forms.RichTextBox editBox;
}
}
A: I guess it depends on the font size; I used "Courier New, 9pt, style=bold".
*
*Added fix for smooth scrolling, (not line by line)
*Added fix for big files, when scroll value is over 16bit.
NumberedTextBox.cs
using System;
using System.ComponentModel;
using System.Drawing;
using System.Linq;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public partial class NumberedTextBox : UserControl
{
private int _lines = 0;
[Browsable(true),
EditorAttribute("System.ComponentModel.Design.MultilineStringEditor, System.Design","System.Drawing.Design.UITypeEditor")]
new public String Text
{
get
{
return editBox.Text;
}
set
{
editBox.Text = value;
Invalidate();
}
}
private Color _lineNumberColor = Color.LightSeaGreen;
[Browsable(true), DefaultValue(typeof(Color), "LightSeaGreen")]
public Color LineNumberColor {
get{
return _lineNumberColor;
}
set
{
_lineNumberColor = value;
Invalidate();
}
}
public NumberedTextBox()
{
InitializeComponent();
SetStyle(ControlStyles.UserPaint, true);
SetStyle(ControlStyles.AllPaintingInWmPaint, true);
SetStyle(ControlStyles.OptimizedDoubleBuffer, true);
SetStyle(ControlStyles.ResizeRedraw, true);
editBox.SelectionChanged += new EventHandler(selectionChanged);
editBox.VScroll += new EventHandler(OnVScroll);
}
private void selectionChanged(object sender, EventArgs args)
{
Invalidate();
}
private void DrawLines(Graphics g)
{
g.Clear(BackColor);
int y = - editBox.ScrollPos.Y;
for (var i = 1; i < _lines + 1; i++)
{
var size = g.MeasureString(i.ToString(), Font);
g.DrawString(i.ToString(), Font, new SolidBrush(LineNumberColor), new Point(3, y));
y += Font.Height + 2;
}
var max = (int)g.MeasureString((_lines + 1).ToString(), Font).Width + 6;
editBox.Location = new Point(max, 0);
editBox.Size = new Size(ClientRectangle.Width - max, ClientRectangle.Height);
}
protected override void OnPaint(PaintEventArgs e)
{
_lines = editBox.Lines.Count();
DrawLines(e.Graphics);
e.Graphics.TranslateTransform(50, 0);
editBox.Invalidate();
base.OnPaint(e);
}
private void OnVScroll(object sender, EventArgs e)
{
Invalidate();
}
public void Select(int start, int length)
{
editBox.Select(start, length);
}
public void ScrollToCaret()
{
editBox.ScrollToCaret();
}
private void editBox_TextChanged(object sender, EventArgs e)
{
Invalidate();
}
}
public class RichTextBoxEx : System.Windows.Forms.RichTextBox
{
private double _Yfactor = 1.0d;
[DllImport("user32.dll")]
static extern IntPtr SendMessage(IntPtr hWnd, Int32 wMsg, Int32 wParam, ref Point lParam);
private enum WindowsMessages
{
WM_USER = 0x400,
EM_GETSCROLLPOS = WM_USER + 221,
EM_SETSCROLLPOS = WM_USER + 222
}
public Point ScrollPos
{
get
{
var scrollPoint = new Point();
SendMessage(this.Handle, (int)WindowsMessages.EM_GETSCROLLPOS, 0, ref scrollPoint);
return scrollPoint;
}
set
{
var original = value;
if (original.Y < 0)
original.Y = 0;
if (original.X < 0)
original.X = 0;
var factored = value;
factored.Y = (int)((double)original.Y * _Yfactor);
var result = value;
SendMessage(this.Handle, (int)WindowsMessages.EM_SETSCROLLPOS, 0, ref factored);
SendMessage(this.Handle, (int)WindowsMessages.EM_GETSCROLLPOS, 0, ref result);
var loopcount = 0;
var maxloop = 100;
while (result.Y != original.Y)
{
// Adjust the input.
if (result.Y > original.Y)
factored.Y -= (result.Y - original.Y) / 2 - 1;
else if (result.Y < original.Y)
factored.Y += (original.Y - result.Y) / 2 + 1;
// test the new input.
SendMessage(this.Handle, (int)WindowsMessages.EM_SETSCROLLPOS, 0, ref factored);
SendMessage(this.Handle, (int)WindowsMessages.EM_GETSCROLLPOS, 0, ref result);
// save new factor, test for exit.
loopcount++;
if (loopcount >= maxloop || result.Y == original.Y)
{
_Yfactor = (double)factored.Y / (double)original.Y;
break;
}
}
}
}
}
NumberedTextBox.Designer.cs
partial class NumberedTextBox
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Component Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.editBox = new WebTools.Controls.RichTextBoxEx();
this.SuspendLayout();
//
// editBox
//
this.editBox.AcceptsTab = true;
this.editBox.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom)
| System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.editBox.BorderStyle = System.Windows.Forms.BorderStyle.None;
this.editBox.Location = new System.Drawing.Point(27, 3);
this.editBox.Name = "editBox";
this.editBox.ScrollPos = new System.Drawing.Point(0, 0);
this.editBox.Size = new System.Drawing.Size(120, 115);
this.editBox.TabIndex = 0;
this.editBox.Text = "";
this.editBox.WordWrap = false;
this.editBox.TextChanged += new System.EventHandler(this.editBox_TextChanged);
//
// NumberedTextBox
//
this.BorderStyle = System.Windows.Forms.BorderStyle.FixedSingle;
this.Controls.Add(this.editBox);
this.Name = "NumberedTextBox";
this.Size = new System.Drawing.Size(150, 121);
this.ResumeLayout(false);
}
private RichTextBoxEx editBox;
#endregion
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Switching to a Standing Desk At work I have a standard desk (4 legs, flat surface, you get the picture). For a while now I've been thinking about converting to a standing desk. What would be the best way to go about this on a limited budget? Are there some good laptop/keyboard stands I could place on my existing desk? Which ones are the best?
I'm trying to avoid requesting a whole new desk, and keeping things as simple as possible.
A: Talk to an occupational therapist and get their advice because you'll be drastically changing the way you posture yourself for hours at a time.
Agencies that assist people with disabilities and their carers (if you're in Australia, look up the Independent Living Centre in your capital city) would be a good start. You'll be able to test out a variety of models if they have a showroom and get advice from a medical professional not a furniture salesman.
A: This has been featured in a couple of blogs that are likely near and dear to most SOers:
*
*Lifehacker: Coolest Workspace Contest: DIY Startup Desk
*Coding Horror: Computer Workstation Ergonomics
I think there are clearly some benefits to standing for a portion of the day. Even better would be a treadmill desk (watch the "Exercise boosts brain power" film, and pay attention about 3:09 in).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Beginner looking for beautiful and instructional Python code As a complete beginner with no programming experience, I am trying to find beautiful Python code to study and play with. Please answer by pointing to a website, a book or some software project.
I have the following criteria:
*
*complete code listings (working, hackable code)
*beautiful code (highly readable, simple but effective)
*instructional for the beginner (yes, hand-holding is needed)
I've tried learning how to program for too long now, and have never gotten to the point where the rubber hits the road. My main agenda is best spelled out by Nat Friedman's "How to become a hacker".
I'm aware of O'Reilly's "Beautiful Code", but think of it as too advanced and confusing for a beginner.
A: Read the Python libraries themselves. They're working, hackable, elegant, and instructional. Some is simple, some is complex.
Best of all, you got it when you downloaded Python itself. It's in your Python library directory. Nothing more to do except start poking around.
A: Just do it.
Seriously, you're never going to learn to be a good programmer until you write some programs. First you'll write bad programs, then you'll fix them, then you'll write better ones, etc...
If you aren't insatiably motivated to try coding, then maybe it isn't for you. One way to get motivated is to get a job that requires you to code... for me, there's nothing like having my salary and pride on the line to get me working :)
A: The Python project itself maintains a nice list of beginner's guides.
A: Beautiful is so hard to define, there's no real answer to this question. Your best advice to follow what Nat says in the post you linked:
*
*Download the source code to the program you want to change
*Untar it on your hard drive
*Get it to build and run
*Open the source code in an editor
*Find the part of the code that you need to change to make the program do what you want it to do
*Make the changes you need to make to the code and test it to make sure it works
*Run the diff -u command and email the output to the mailing list
There is no point looking for beautiful code. Just look at and fix bugs in projects that you use (Django & Twisted might be good candidates).
A: Buy Programming Collective Intelligence. Great book of interesting AI algorithms based on mining data and all of the examples are in very easy to read Python.
The other great book is Text Processing in Python
A: I've seen How to Think Like a Computer Scientist recommended in many blogs.
A: I personally think that reading good code won't work until you have a firm understanding of the language, especially of its idioms. First, I recommend the basic Wikibook "Non-Programmer's Tutorial for Python" to start out. If most of that makes sense, you have a good understanding of the basics already.
After that, I recommend Dive into Python. You'll see a lot of other people recommending this book, because it's comprehensive and free. You'll learn a lot of language specific idioms in Dive into Python, especially in the first few chapters. As you're reading it, try to do basic programs using the techniques Mark Pilgrim shows.
Dive into Python gets into specific modules later in the book. That will probably get a little boring, and when it does, you might want to look at code. I don't feel qualified to rank the code used by these, but Django and Deluge are both bigger projects that will show you the organization of large programs. Though they will probably be overwhelming unless you take the time to really attack them one piece at a time and get a firm understanding.
A: I've learned quite a bit of beautiful and useful Python from O'Reilly's Python Cookbook. http://oreilly.com/catalog/9780596001674/
I've also learned much from ActiveState's Python Recipe's web page. http://code.activestate.com/recipes/langs/python/
A: I'd recommend you review the Exaile music player for Linux. It includes a lot of practically useful things like plugins, lambdas, decorators, a settings manager, a GUI (using GTK+) and much more.
The Exaile source code is not ideal, but it will give you enough helpful information and basic Python coding concepts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Install PHP on XP / IIS 5.1? I am trying to install PHP onto my development box (XP SP3 / IIS 5.1). I've got PHP 5.2.6 stable downloaded (the MSI installer package) and I am getting an error "Cannot find httpd.conf". After that the install seems to breeze by quickly (more quickly than I would have expected), and when I try to execute a simple PHP script from my localhost test directory that I created, I get a slew of missing DLL errors. I have seen posts out there which indicate that it's possible and has been done. I don't see any bug reports for this MSI at PHP.net support. Any ideas?
A: Not sure if you already have this but I use WAMP from http://www.wampserver.com/en
It's easy and simple to set up; it has an icon in the system tray to show that it's active, and you can make it go online (available to the outside) by clicking the icon and setting it. I used this when I was first learning PHP since it has everything in one - no need to set up any other service like IIS.
A: Probably the installer didn't configure your server to use PHP properly. Check out Microsoft's page on enabling PHP on IIS or alternatively switch to Apache if that's a viable option.
A: I'll see if I can remember it correctly:
*
*Unzip PHP zip file into c:\Program Files\php (or run the installer)
*Copy php5ts.dll into c:\windows\system32
*Copy php.ini-dist into c:\windows and rename it to php.ini
*Edit c:\windows\php.ini and look for extension dir - make it point to c:\Program Files\php\extensions (or wherever you put it)
*This is where my memory gets fuzzy: edit your IIS application settings, add a script map for .php files, and set the executable to php5isapi.dll (php5ts.dll is the runtime you copied to system32, not the ISAPI module)
*Profit!?!??!?!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Sending SVN commits to an RSS feed So my favourite web tool, Subtlety, was recently discontinued, which means that I no longer have easy access to the commit logs of various SVN projects that I follow. Are there any other tools that easily pump out an RSS feed of commits for a public SVN repo?
A: I was going to suggest Trac as well, until I realized you probably don't have administrative control over the repositories in question. Perhaps this apparent solution will work for you?
http://svnfeed.com/
It seems to work well for the one repository I tried it on, and it's surprisingly fast.
A: SvnFeed
Also check out CommitMonitor for windows, which features really slick diff support
A: I've just tried sventon with good results. It's a web based SVN viewer, reasonable in what it does, but provides a good RSS feed at whatever level you want to subscribe to.
A: Atlassian Fisheye ( http://www.atlassian.com/software/fisheye/ ) allows you to get commit notification on email as well as RSS (and as a bonus, you can select which directory/file to subscribe to, and only get notified of those file/dir changes).
A: Another tool would be http://sourceforge.net/projects/svn2rss/
A: Try Trac. Besides feeds you can browse the repository and it has a nice Wiki.
A: I like Trac, though it may be overkill for what you need. WebSVN is also a nice tool for browsing repositories over the web. Both of these provide RSS feeds for the log. You can also subscribe to the log for just a particular branch, etc.
A: I was using Sublety as well. It was quite a bummer when it went away. I ended up rolling my own solution. It is .NET based and requires the use of the post-commit hook script. It's free, and I've provided both binaries and source code.
Subversion C# RSS Feed Hook Script
Live repository rss feed with nice xsl template
A: There's subveRSSed, which you just drop into your post-commit action.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: What is the easiest, most concise way to make selected attributes in an instance be readonly? In Python, I want to make selected instance attributes of a class be readonly to code outside of the class. I want there to be no way outside code can alter the attribute, except indirectly by invoking methods on the instance. I want the syntax to be concise. What is the best way? (I give my current best answer below...)
A: You should use the @property decorator.
>>> class a(object):
... def __init__(self, x):
... self.x = x
... @property
... def xval(self):
... return self.x
...
>>> b = a(5)
>>> b.xval
5
>>> b.xval = 6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
A: class C(object):
def __init__(self):
self.fullaccess = 0
self.__readonly = 22 # almost invisible to outside code...
# define a publicly visible, read-only version of '__readonly':
readonly = property(lambda self: self.__readonly)
def inc_readonly( self ):
self.__readonly += 1
c=C()
# prove regular attribute is RW...
print "c.fullaccess = %s" % c.fullaccess
c.fullaccess = 1234
print "c.fullaccess = %s" % c.fullaccess
# prove 'readonly' is a read-only attribute
print "c.readonly = %s" % c.readonly
try:
c.readonly = 3
except AttributeError:
print "Can't change c.readonly"
print "c.readonly = %s" % c.readonly
# change 'readonly' indirectly...
c.inc_readonly()
print "c.readonly = %s" % c.readonly
This outputs:
$ python ./p.py
c.fullaccess = 0
c.fullaccess = 1234
c.readonly = 22
Can't change c.readonly
c.readonly = 22
c.readonly = 23
My fingers itch to be able to say
@readonly
self.readonly = 22
i.e., use a decorator on an attribute. It would be so clean...
A: Here's how:
class whatever(object):
def __init__(self, a, b, c, ...):
self.__foobar = 1
self.__blahblah = 2
foobar = property(lambda self: self.__foobar)
blahblah = property(lambda self: self.__blahblah)
(Assuming foobar and blahblah are the attributes you want to be read-only.) Prepending two underscores to an attribute name effectively hides it from outside the class, so the internal versions won't be accessible from the outside. This only works for new-style classes inheriting from object since it depends on property.
On the other hand... this is a pretty silly thing to do. Keeping variables private seems to be an obsession that comes from C++ and Java. Your users should use the public interface to your class because it's well-designed, not because you force them to.
Edit: Looks like Kevin already posted a similar version.
A: There is no real way to do this. There are ways to make it more 'difficult', but there's no concept of completely hidden, inaccessible class attributes.
If the person using your class can't be trusted to follow the API docs, then that's their own problem. Protecting people from doing stupid stuff just means that they will do far more elaborate, complicated, and damaging stupid stuff to try to do whatever they shouldn't have been doing in the first place.
A: You could use a metaclass that auto-wraps methods (or class attributes) that follow a naming convention into properties (shamelessly taken from Unifying Types and Classes in Python 2.2):
class autoprop(type):
def __init__(cls, name, bases, dict):
super(autoprop, cls).__init__(name, bases, dict)
props = {}
for name in dict.keys():
if name.startswith("_get_") or name.startswith("_set_"):
props[name[5:]] = 1
for name in props.keys():
fget = getattr(cls, "_get_%s" % name, None)
fset = getattr(cls, "_set_%s" % name, None)
setattr(cls, name, property(fget, fset))
This allows you to use:
class A:
    __metaclass__ = autoprop
    def __init__(self):
        self.__x = 42
    def _get_readonly(self):
        return self.__x
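A quick usage sketch of the corrected class above (the value 42 is arbitrary):
a = A()
print a.readonly    # 42, served by the generated read-only property
a.readonly = 5      # raises AttributeError, since no _set_readonly was defined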
A: I am aware that William Keller's answer is the cleanest solution by far... but here's something I came up with.
class readonly(object):
def __init__(self, attribute_name):
self.attribute_name = attribute_name
def __get__(self, instance, instance_type):
if instance is not None:
return getattr(instance, self.attribute_name)
else:
raise AttributeError("class %s has no attribute %s" %
(instance_type.__name__, self.attribute_name))
def __set__(self, instance, value):
raise AttributeError("attribute %s is readonly" %
self.attribute_name)
And here's the usage example
class a(object):
def __init__(self, x):
self.x = x
xval = readonly("x")
Unfortunately this solution can't handle private variables (__ named variables).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What's the difference between Polymorphism and Multiple Dispatch? ...or are they the same thing? I notice that each has its own Wikipedia entry: Polymorphism, Multiple Dispatch, but I'm having trouble seeing how the concepts differ.
Edit: And how does Overloading fit into all this?
A: Polymorphism is the facility that allows a language/program to make decisions during runtime on which method to invoke based on the types of the parameters sent to that method.
The number of parameters used by the language/runtime determines the 'type' of polymorphism supported by a language.
Single dispatch is a type of polymorphism where only one parameter is used (the receiver of the message - this, or self) to determine the call.
Multiple dispatch is a type of polymorphism wherein multiple parameters are used in determining which method to call. In this case, the receiver as well as the types of the method parameters are used to tell which method to invoke.
So you can say that polymorphism is the general term and multiple and single dispatch are specific types of polymorphism.
Addendum: Overloading happens during compile time. It uses the type information available during compilation to determine which type of method to call. Single/multiple dispatch happens during runtime.
Sample code:
using NUnit.Framework;
namespace SanityCheck.UnitTests.StackOverflow
{
[TestFixture]
public class DispatchTypes
{
[Test]
public void Polymorphism()
{
Baz baz = new Baz();
Foo foo = new Foo();
// overloading - parameter type is known during compile time
Assert.AreEqual("zap object", baz.Zap("hello"));
Assert.AreEqual("zap foo", baz.Zap(foo));
// virtual call - single dispatch. Baz is used.
Zapper zapper = baz;
Assert.AreEqual("zap object", zapper.Zap("hello"));
Assert.AreEqual("zap foo", zapper.Zap(foo));
// C# doesn't support multiple dispatch, so it doesn't
// know that oFoo is actually of type Foo.
//
// In languages with multiple dispatch, the type of oFoo will
// also be used in runtime so Baz.Zap(Foo) will be called
// instead of Baz.Zap(object)
object oFoo = foo;
Assert.AreEqual("zap object", zapper.Zap(oFoo));
}
public class Zapper
{
public virtual string Zap(object o) { return "generic zapper" ; }
public virtual string Zap(Foo f) { return "generic zapper"; }
}
public class Baz : Zapper
{
public override string Zap(object o) { return "zap object"; }
public override string Zap(Foo f) { return "zap foo"; }
}
public class Foo { }
}
}
A: With multiple dispatch, a method can have multiple arguments passed to it and which implementation is used depends on each argument's type. The order that the types are evaluated depends on the language. In LISP, it checks each type from first to last.
Languages with multiple dispatch make use of generic functions, which are just function declarations and aren't like generic methods, which use type parameters.
Multiple dispatch allows for subtyping polymorphism of arguments for method calls.
Single dispatch also allows for a more limited kind of polymorphism (using the same method name for objects that implement the same interface or inherit the same base class). It's the classic example of polymorphism, where you have methods that are overridden in subclasses.
Beyond that, generics provide parametric type polymorphism (i.e., the same generic interface to use with different types, even if they're not related — like List<T>: it can be a list of any type and is used the same way regardless).
A: Multiple Dispatch is more akin to function overloading (as seen in Java/C++), except the function invoked depends on the run-time type of the arguments, not their static type.
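As an illustration that is not part of the original answers: later C# versions (4.0 and up) can emulate this with the dynamic keyword, which defers overload resolution to runtime. A self-contained sketch reusing the Zap example from the code above:
using System;
public class Foo { }
public class Baz
{
    public string Zap(object o) { return "zap object"; }
    public string Zap(Foo f) { return "zap foo"; }
}
public static class DynamicDispatchDemo
{
    public static void Main()
    {
        object oFoo = new Foo();
        Baz baz = new Baz();
        // static overload resolution sees only 'object'
        Console.WriteLine(baz.Zap(oFoo));           // "zap object"
        // the dynamic cast defers binding to runtime, which sees a Foo
        Console.WriteLine(baz.Zap((dynamic)oFoo));  // "zap foo"
    }
}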
A: I've never heard of Multiple Dispatch before, but after glancing at the Wikipedia page it looks a lot like MD is a type of polymorphism, when used with the arguments to a method.
Polymorphism is essentially the concept that an object can be seen as any type that is its base. So if you have a Car and a Truck, they can both be seen as a Vehicle. This means you can call any Vehicle method for either one.
Multiple dispatch looks similar, in that it lets you call methods with arguments of multiple types; however, I don't see certain requirements in the description. First, it doesn't appear to require a common base type (not that I could imagine implementing THAT without void*), and you can have multiple objects involved.
So instead of calling the Start() method on every object in a list (which is a classic polymorphism example), you can call a StartObject(Object C) method defined elsewhere and code it to check the argument type at run time and handle it appropriately. The difference here is that the Start() method must be built into the class, while the StartObject() method can be defined outside of the class so the various objects don't need to conform to an interface.
This could be nice if the Start() method needed to be called with different arguments. Maybe Car.Start(Key carKey) vs. Missile.Start(int launchCode)
But both could be called as StartObject(theCar) or StartObject(theMissile)
Interesting concept...
A: if you want the conceptual equivalent of a method invocation
(obj_1, obj_2, ..., obj_n)->method
to depend on each specific type in the tuple, then you want multiple dispatch. Polymorphism corresponds to the case n=1 and is a necessary feature of OOP.
A: Multiple Dispatch is a kind of polymorphism. In Java/C#/C++, there is polymorphism through inheritance and overriding, but that is not multiple dispatch, which is based on two or more arguments (not just this, like in Java/C#/C++)
A: Multiple dispatch is based on polymorphism. Typical polymorphism encountered in C++, C#, VB.NET, etc... uses single dispatch -- i.e. the function that gets called only depends on a single class instance. Multiple dispatch relies on multiple class instances.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Synchronize a DataSet and DataTables with Database I am using ADO.net with a DataSet and DataTables for the first time and I have run into a pretty significant problem. Each time that I change anything in my database table definitions, I don't seem to have an easy way to synchronize these changes to my DataTable definitions in my DataSet.
So far, it has been quick to simply delete my DataTable object and drag a new one over from the Server Explorer and recreate my queries. But as my application grows, this will soon become impossible. I also don't want to have to log every single change to the database and remember to change those definitions in each respective column in any affected DataTables.
For the record, I am using VS 2008, MySQL 5.0, ASP.net. ADO.net is my data provider, and I am really regretting it. Thanks.
A: Update: This did not work as per Mathew. Leaving my post here until a better solution comes along.
It seems there is a Configure option on the DataTable context menu, which should trigger a schema refresh. Can you try it out and let us know?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is the worst project failure you've ever been on? Most importantly, why did it fail?
What I'm looking for here is pathologies that anyone, whether new to the team or just waking up one day, can spot.
A: My latest project failed because all the developers spent their time asking and answering questions on Stack Overflow, trying to build up Karma points :-).
A: Read 'Death March'. That pretty much sums up pathologies etc.
A: A project that had no end. No one knew when it was finished. All we knew was that we had to keep doing whatever it was that we were doing. And yes, sadly, it was just as vague as I explained it.
Did it fail? In my opinion it did, since we never saw what happened to it in the end; we were moved on to another project, and you could say it faded into the night.
Why? Everything we did was the opposite of the project management body of knowledge.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Transparently swapping pointers to character arrays in C++ I have a 2D character array:
char nm[MAX1][MAX2] = { "john", "bob", "david" };
I want to swap two of these elements (without std::swap) by simply writing
swapPointers(nm[0], nm[1]);
where swapPointers looks like this
void swapPointers(char *&a, char *&b)
{
char *temp = a;
a = b;
b = a;
}
However, this does not compile (and while adding casts makes it compile, the pointers end up pointing to wrong/weird locations).
Can anybody help?
Thanks!
A: Zan is close, but his problem is that his 'swap' function can take any pointer to characters. This can cause problems if misused. Here is a safer version:
void swap(char (&x)[MAX2], char (&y)[MAX2])
{
char temp[MAX2];
memcpy(temp, x, MAX2);
memcpy(x, y, MAX2);
memcpy(y, temp, MAX2);
}
There is also a misunderstanding on the part of the poster: 'nm' is a 2-dimensional array of characters. There are no pointers. nm[0], nm[2], etc. are not pointers either -- they are still (1-dimensional) arrays. The fact that 1-dimensional arrays are implicitly convertible to pointers causes this type of confusion among many C and C++ programmers.
In order to swap the data in the 2-dimensional array, you have to swap blocks of memory of size MAX2 -- as indicated by both 'swap' functions Zan and I wrote.
A: You cannot swap those pointers by reassigning the pointers, because those pointers point into a 2-D character array.
nm[a] and nm[b] are very strongly const because nm is a truly const object. If it wasn't, you could move C variables around in RAM by reassigning their names.
Just think of the havoc! So you can't do that. :-)
To swap what those pointers point to, you need to swap the values in those array locations.
void swap(char *a, char *b)
{
    char temp[MAX2];
    memcpy(temp, a, MAX2);
    memcpy(a, b, MAX2);
    memcpy(b, temp, MAX2);
}
A: Your swapPointers() swaps pointers, whereas you're trying to pass it arrays.
If you change
char nm[MAX1][MAX2]
to
char *nm[MAX1]
and fix the small bug in swapPointers() (last line should be b = temp;), it works.
A: The real point is, if you are using c++ then you should be using a std::vector of std::string instead:
std::vector<std::string> nm;
nm.push_back( "john" );
nm.push_back( "bob" );
nm.push_back( "david" );
std::swap( nm[0], nm[1] );
Note: not tested.
A: void swapPointers(char** ppa, char** ppb)
{
    char* ptemp = *ppa;
    *ppa = *ppb;
    *ppb = ptemp;
}
swapPointers(&nm[0], &nm[1]);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can I turn off impersonation just in a couple instances I have an app that has impersonation used throughout. But when a user is logged in as an admin, a few operations require them to write to the server itself. Now if these users do not have rights on the actual server (some don't), it will not let them write.
What I want to do is turn off impersonation for just a couple commands.
Is there a way to do something like this?
using(HostingEnvironment.Impersonate.Off())
//I know this isn't a command, but you get the idea?
Thank you.
A: Make sure the Application Pool does have the proper rights that you need.
Then, when you want to revert to the application pool identity... run the following:
private WindowsImpersonationContext context = null;
public void RevertToAppPool()
{
try
{
if (!WindowsIdentity.GetCurrent().IsSystem)
{
context = WindowsIdentity.Impersonate(System.IntPtr.Zero);
}
}
catch { }
}
public void UndoImpersonation()
{
try
{
if (context != null)
{
context.Undo();
}
}
catch { }
}
A: I am not sure if this is the preferred approach but when I wanted to do this I new'd up an instance of a WindowsIdentity and called the Impersonate method. This allows subsequent code to impersonate a different Windows user. It returns a WindowsImpersonationContext that has an Undo method which reverts the impersonation context back again.
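For illustration, a minimal sketch of that approach (the user token is assumed to come from a Win32 call such as LogonUser; error handling is omitted):
using System;
using System.Security.Principal;

class ImpersonationSketch
{
    static void RunAs(IntPtr userToken)
    {
        WindowsIdentity identity = new WindowsIdentity(userToken);
        WindowsImpersonationContext ctx = identity.Impersonate();
        try
        {
            // code here runs under the impersonated user's security context
        }
        finally
        {
            ctx.Undo(); // revert to the previous security context
        }
    }
}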
A: You could turn off authentication for the page and then manually impersonate the authenticated user during the remainder of your code.
http://support.microsoft.com/kb/306158
This has a reference to that last part, but basically you impersonate User.Identity
This will mean you will have to impersonate at the beginning of any call to the page, turn it off when you need it off, then turn it back on when you are done, but it should be a workable solution.
A: I just ended up giving the folders write permissions to "Authenticated Users"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Formula for controlling the movement of a tank-like vehicle? Anyone know the formula used to control the movement of a simple tank-like vehicle?
To 'steer' it, you need to alter the force applied to the left and right "wheels". Eg. 1 unit of force on both wheels makes it go forward. -1 units of force on both wheels makes it go backwards. Apply more force to one wheel than the other and it turns.
How would you calculate the amount of force needed on both wheels to turn the tank a certain number of degrees either way?
Or am I thinking about this in the wrong way?
edit:
As William Keller mentioned, I left out the speed of the tank. Assume 1 unit of force on both wheels moves the tank forward at 1 unit per second.
For anyone who's interested, I just found this thread on gamedev.net:
http://66.102.9.104/search?q=cache:wSn5t58ACJwJ:www.gamedev.net/community/forums/topic.asp%3Ftopic_id%3D407491+tank+track+radius+velocity&hl=en&ct=clnk&cd=1&gl=za&client=firefox-a
Another thread:
http://www.physicsforums.com/showthread.php?t=220317
It turns out the key to finding the formula was just knowing the correct terminology ("skid steer") :P
A: You're thinking about it the wrong way. The thing is, differing amounts of force on the tracks will not turn the tank a certain number of degrees. Rather, differing force will alter the RATE of turn.
The relationship between the force and the turn rate will vary depending on the mechanics of the tank. The wider the tank the slower it turns. The faster the tank the faster it turns.
P.S. Some more thoughts on this: I don't think a physics-based answer is possible without basing it off a real-world tank. Several of the answers address the physics of the turn but there is the implicit assumption in all of them that the system has infinite power. Can the tank really operate at 1, -1? And can it reach that velocity instantly--acceleration applies to turns, also.
Finally, treads have length as well as width. That means you are going to get some sideways slippage of the treads in any turning situation, the faster the turn the more such slippage will be required. That is going to burn up energy in a sharp turn, even if the engine has the power to do a 1, -1 turn it wouldn't turn as fast as that would indicate because of friction losses.
A: For a skid steered vehicle that is required to turn in radius 'r' at a given speed 'Si' of the Inner Wheel/Track, the Outer track must be driven at speed 'So' :
So = Si * ((r+d)/r)
Details:
In Skid Steering, a turn is performed by the outer wheels/track traveling further distance than the inner wheels/track.
Furthermore, the extra distance traveled is completed in the same time as the inner track, meaning that the outer wheels/track must run faster.
Circle circumference circumscribed by "Inner" track:
c1 = 2*PI*r
'r' is radius of circle origin to track/wheel
Circle circumference circumscribed by "Outer" track:
c2 = 2*PI*(r+d)
'r' is radius of circle origin to inner track/wheel
'd' is the distance between the Inner and Outer wheels/track.
Furthermore, c2 = X * c1, which says that c2 is proportionally bigger than c1
X = c2 / c1
X = 2*PI*(r+d) / 2*PI*r
X = (r+d)/r
Therefore for a skid steered vehicle that is required to turn in radius 'r' at a given speed 'Si' of the Inner Wheel/Track, the Outer track must be driven at:
So = Si * ((r+d)/r)
Where:
'So' = Speed of outer track
'Si' = Speed of inner track
'r' = turn radius from inner track
'd' = distance between vehicle tracks.
********* <---------------- Outer Track
**** | ****
** |<--------**----------- 'd' Distance between tracks
* *******<-------*---------- Inner Track
* *** ^ *** *
* * |<-----*------*-------- 'r' Radius of Turn
* * | * *
* * O * *
* * * *
* * * *
* *** *** *
* ******* *
** **
**** ****
*********
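A minimal sketch of the formula above, using only the symbols already defined (the sample numbers are arbitrary):
def outer_track_speed(si, r, d):
    # So = Si * ((r + d) / r), with r measured from the inner track
    return si * (r + d) / float(r)

# inner track at 1.0 units/sec, turn radius 4.0, track separation 2.0
print outer_track_speed(1.0, 4.0, 2.0)   # 1.5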
A: Change in angle (in radians/sec) = (l-r)/(distance between treads)
Velocity = (l+r)/2
For the dtheta, imagine you had a wooden pole between your two hands, and you want to calculate how much it rotates depending on how hard and which way your hands are pressing - you want to figure out:
how much surface distance on the pole you cover per sec -> how many rotations/sec that is -> how many radians/sec (i.e. mult by 2pi)
A: Well, keep in mind that you're also talking about duration here. You need to find out the forces taking in to account the speed at which the tank turns at (1, -1).
I.E., if the tank takes one second to spin 360˚ at (1, -1), and you want to spin 180˚ in one second, (.5, -.5) would do the trick. If you wanted to spin the same amount in half a second, then (1, -1) would work.
This is all further complicated if you use abs(lrate) != abs(rrate), in which case you'll probably need to break out a pencil!
A: Here's how I would attack the tank problem.
The center of the tank will probably be moving at the average speed of the right and left tracks. At the same time, the tank will be rotating clockwise around its center by ([left track speed] - [right track speed]) / [width].
This should give you speed and a direction vector.
Disclaimer: I have not tested this...
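A hedged sketch of that answer as a per-frame update (axis and sign conventions are assumptions; with this convention a faster left track turns the tank one way, a faster right track the other):
import math

def step_tank(x, y, heading, v_left, v_right, width, dt):
    v = (v_left + v_right) / 2.0         # forward speed: average of the two tracks
    omega = (v_left - v_right) / width   # turn rate from the speed difference
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading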
A: It has been a while since I did any physics, but I would have thought that the opposing forces of the two tracks moving in opposite directions result in a torque about the center of mass of the tank.
It is this torque that results in the angular momentum of the tank, which is just another way of saying the tank starts to rotate.
A: It's not a matter of force - it depends on the difference in velocity between the 2 sides, and how long that difference holds (also the tank's width, but that's just a constant parameter).
Basically, you should calculate it along these lines:
*
*The velocity ratio between the 2 sides is the same as the radius ratio.
*The tank's width is the actual difference between the 2 radii.
*Using those 2 numbers, find the actual values for the radius.
*Multiply the velocity of one of the sides by the time it was moving to get the distance it traveled.
*Calculate what part of a full circle it traveled by dividing that into that circle's perimeter.
A: I'd say you're thinking about it in the wrong way.
Increasing the difference in speed between the two treads doesn't cause degrees of turn - they, combined with time (distance at different speed) cause degrees of turn.
The more of a difference in tread speed, the less time needed to achieve X degrees of turn.
So, in order to come up with a formula, you'll have to make a few assumptions. Either turn at a fixed rate, and use time as your variable for turning X degrees, or set a fixed amount of time to complete a turn, and use the track speed difference as your variable.
A: You could look at it by saying: each track describes a circle.
In the case where one track is turning (lets say the left) and the other isn't, then the facing will be dependant on how long and how far the left tracks turn for.
This distance will be the speed of the tracks x time.
Now draw a triangle with this distance, and the wheelbase pencilled in, plus some sin and cos equations & approximations, and you might get an approximate equation like:
facing change = distance travelled by tracks / wheelbase
Then you could incorporate some acceleration to be more realistic. More physics...
The speed isn't constant - it accelerates (and decelerates).
i.e. over a period of time the speed increases from 0, to 0.1 ... 0.2 ... 0.3 up to a limit.
Of course, as the speed changes, the rate of change of the facing changes too - a bit more realistic than the speed (and thus rate of change of the facing) being entirely constant.
In other words, instead of controlling the speed, the player controls the change in speed. This would make the speed go from 0 ... 0.02 ... 0.06 ... 0.1 etc. as the player pushes the controller. Similarly for deceleration, but a bit more rapidly probably.
hope this helps.
A: I think everyone should also take note of the fact that tanks can turn on a zero-radius circle: by applying the same speed on each track but in opposite directions, tanks can turn on a dime.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Free/Open Source Test Generator for Java? Are there any libraries for Java that can generate unit tests or unit test skeletons for existing code? I'm looking for something similar to pythoscope. Ideally it would generate code that follows JUnit4 or TestNG conventions.
It looks like Agitar does something like this, but I'm looking for something free.
A: Most IDEs will generate test method stubs for any class. I know Eclipse will.
New->JUnit Class
then you tell it which class you're testing and what methods you want to test.
A: My plugin creates unit tests; it generates both JUnit and TestNG tests. Hope you like it.
http://fast-code.sourceforge.net/
A: Agitar had a free service that allowed you to do this (you had to be prepared to let it send your code to their servers).
If you were using Scala, there is "ScalaCheck" - but I am not sure how well it works with Java source (even though it's all byte code).
A: I've been trying to get TestGen4j running but it looks like it hasn't been maintained :( Perhaps you could give it a try?
A: From Google (https://developers.google.com/java-dev-tools/codepro/doc/features/junit/test_case_generation):
CodePro Analytix
The CodePro JUnit Test Case Generation facility allows you to automate the creation of comprehensive JUnit regression test cases. Given an input class, the tool creates a corresponding test class complete with multiple test methods for each input class method. The tool analyzes each method and input argument with the goal of generating test cases that exercise each line of code
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Adding New Element to Text Substring Say I have the following string:
"I am the most foo h4ck3r ever!!"
I'm trying to write a makeSpecial(foo) function where the foo substring would be wrapped in a new span element, resulting in:
"I am the most <span class="special">foo></span> h4ck3r ever!!"
BeautifulSoup seemed like the way to go, but I haven't been able to make it work.
I could also pass this to the browser and do it with javascript, but that doesn't seem like a great idea.
Some advice for this would be really useful, especially in python.
A: How about this:
Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> def makeSpecial(mystring, special_substr):
...     return mystring.replace(special_substr, '<span class="special">%s</span>' % special_substr)
...
>>> makeSpecial("I am the most foo h4ck3r ever!!", "foo")
'I am the most <span class="special">foo</span> h4ck3r ever!!'
>>>
A: As far as I can tell, you're doing a simple string replace. You're replacing "foo" with "bar foo bar." So from string you could just use
replace(old, new[, count])
Return a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.
So for you it would be:
myStr.replace("foo", "<span>foo</span>")
A: If you wanted to do it with javascript/jQuery, take a look at this question: Highlight a word with jQuery
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Where can I find a template for documentation about server-side installation of software? I'm looking for a good template on server-side installation of software for a project I'm working on.
The client-side is pretty straight-forward. The server-side installation is a little trickier. It is made up of several pieces (services, database connections, dependencies, ports that need to be unblocked, etc.). During a recent test, several undocumented pieces were discovered. Now I need to create installation documentation for our disaster-recovery plans and ways to test the installation without necessarily having a "full-up" system to test on.
I'd really like a suggestion of where I can get a template or a really good example of such a document. I'd like it to be something that an operator could read and comprehend in the heat of a recovery.
[EDIT]
Our current documentation comes mainly from the questions our administrators have had during off-site tests. As new code is written, I'd like to make sure the documentation is written ahead of time. I've been collecting VMWare images to start testing, but was looking for some good examples. It's a Windows Server shop (2000 & 2003). Word templates would be great, but if I could see good documentation, I could create the templates. Any suggestions about what should be tested would be great as well.
[2nd EDIT]
I've gotten several good ideas from the answers posted. After changing my Google search, I came up with some good starting points. They're not perfect, but they are a good start.
Microsoft Exchange - http://technet.microsoft.com/en-us/library/bb125074(EXCHG.65).aspx
iPhone - http://manuals.info.apple.com/en_US/Enterprise_Deployment_Guide.pdf
http://www.novell.com/documentation/gwgateways/gw7_exch/index.html?page=/documentation/gwgateways/gw7_exch/data/ab32nt1.html
http://cregan.wordpress.com/2006/06/22/exchange-2003-step-by-step-installation-instructions/
http://technet.microsoft.com/en-us/magazine/cc160942.aspx
Covers planning in the design stage well - http://www.onlamp.com/pub/a/onlamp/2004/04/08/disaster_recovery.html?page=2
[Edit 10/29/2008]
THIS is the type sample I was looking for. It doesn't have a lot of garbage, but seems to explain enough of the why along with the how http://wiki.alfresco.com/wiki/Installing_Labs_3_Nile
A: The most complete method that we've come up with for creating our DR documentation, involves going through a full cycle (or two) of installation, and documenting each step along the way.
I realize this can be a bit difficult if you don't have a test (or replacement) system to use to create your documentation - but it's worth lobbying for running through this cycle at least once.
(I recommend twice, the second being done by someone not involved with the project - this is how you test the documentation for future admins, who may not be as experienced with the process.)
A side effect of the above is that your documentation grows fairly large - last I had to do it, I believe the completed installation manual for our database servers was 30+ pages.
A: Depending on the admins, automation is helpful. I've had windows admins that want a Word doc with step by step instructions and other admins that wanted a script.
However, some helpful things to include, probably as sections
*
*Database changes
*
*Scripts to run
*Verification that they worked
*Configuration changes
*
*what are the changes
*where is a version of the new file (In my case they diffed the two, which helped reduce errors concerning production-specific values)
*General verification
*
*what should be different from the user perspective (feature changes)
*For web farm deployments, it might be helpful to have a coordination document concerning how the servers need to be pulled in and out of the pool.
A: What should be tested? Well, in the case of a web site, "can you get to the page?" Include a URL as a starting point and let the admin click through to a certain point. It is not necessary for the admin to go through the whole QA cycle, just a confirmation that what you meant to be deployed is really what got deployed.
Other ideas
Also, we (my team at my last job) had QA test the deployment. As a QA person should be, he was not intimate with the details and as he deployed to QA, we were able to get feedback on what went wrong.
Another thing that is useful is sitting down with the admin(s) before the deployment. Go over the instructions and make sure they understand them the same way you do.
Template? Just make sections that have fields for data such as URL to DEV, QA, and PROD. When you write out the instruction you can refer to those. Just make it clear what is being deployed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: winForms + DataGridView binding to a List I'm trying to bind a List<T> to a DataGridView control, and I'm not having any luck creating custom bindings.
I have tried:
gvProgramCode.DataBindings.Add(new Binding("Opcode",code,"Opcode"));
It throws an exception, saying that nothing was found by that property name.
The name of the column in question is "Opcode". The name of the property in the List<T> is Opcode.
ANSWER EDIT: the problem was that I did not have the bindable fields in my class as properties, just public fields... Apparently data binding doesn't reflect on fields, just properties.
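To illustrate the fix described in the edit above (CodeLine is a hypothetical stand-in for the poster's class):
// Binding finds nothing, because Opcode is a public field:
public class CodeLineBroken { public string Opcode; }

// Binding works, because Opcode is now a property that reflection can see:
public class CodeLine { public string Opcode { get; set; } }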
A: I can't really tell what you're trying to do with the example you included, but binding to a generic list of objects is fairly straightforward if you just want to list the objects:
private BindingSource _gridSource;
private BindingSource GridSource
{
get
{
if (_gridSource == null)
_gridSource = new BindingSource();
return _gridSource;
}
}
private void Form1_Load(object sender, EventArgs e)
{
List<FluffyBunny> list = new List<FluffyBunny>();
list.Add(new FluffyBunny { Color = "White", EarType = "Long", Name = "Stan" });
list.Add(new FluffyBunny { Color = "Brown", EarType = "Medium", Name = "Mike" });
list.Add(new FluffyBunny { Color = "Mottled", EarType = "Short", Name = "Torvald" });
GridSource.DataSource = list;
dataGridView1.Columns["EarType"].Visible = false; //Optionally hide a column
dataGridView1.DataSource = GridSource;
}
If you only want to display specific properties of the List's type you should be able to make the unwanted column(s) invisible.
Technically, you don't really need to create the BindingSource, but I find it's a whole lot easier when I'm doing updates or changes if I have it.
Hope this helps.
A: Had the same issue... I had a struct with public fields, obviously. Nothing in the grid. Provided public getters; worked.
A: Another solution I've found is to use the BindingList collection.
private void Form1_Load(object sender, EventArgs e)
{
BindingList<Person> people = new BindingList<Person> {
    new Person {Name="John", Age=23},
    new Person {Name="Lucy", Age=16}
};
dataGridView1.DataSource= people;
}
It works fine for me.
A: Is the property on the grid you are binding to Opcode as well? If you want to bind directly to the List you would just set DataSource = list. The DataBindings collection allows custom binding. Are you trying to do something other than set the datasource?
Are you getting a bunch of empty rows? Do the auto-generated columns have names? Have you verified data is in the object (not just string.Empty)?
class MyObject
{
public string Something { get; set; }
public string Text { get; set; }
public string Other { get; set; }
}
public Form1()
{
InitializeComponent();
List<MyObject> myList = new List<MyObject>();
for (int i = 0; i < 200; i++)
{
string num = i.ToString();
myList.Add(new MyObject { Something = "Something " + num , Text = "Some Row " + num , Other = "Other " + num });
}
dataGridView1.DataSource = myList;
}
this should work fine...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|
Q: PHP code to convert a MySQL query to CSV What is the most efficient way to convert a MySQL query to CSV in PHP please?
It would be best to avoid temp files as this reduces portability (dir paths and setting file-system permissions required).
The CSV should also include one top line of field names.
A: Check out this question / answer. It's more concise than @Geoff's, and also uses the builtin fputcsv function.
$result = $db_con->query('SELECT * FROM `some_table`');
if (!$result) die('Couldn\'t fetch records');
$headers = array();
foreach ($result->fetch_fields() as $field) {
    $headers[] = $field->name;
}
$fp = fopen('php://output', 'w');
if ($fp && $result) {
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');
header('Pragma: no-cache');
header('Expires: 0');
fputcsv($fp, $headers);
while ($row = $result->fetch_array(MYSQLI_NUM)) {
fputcsv($fp, array_values($row));
}
die;
}
A: If you'd like the download to be offered as a download that can be opened directly in Excel, this may work for you: (copied from an old unreleased project of mine)
These functions setup the headers:
function setExcelContentType() {
if(headers_sent())
return false;
header('Content-type: application/vnd.ms-excel');
return true;
}
function setDownloadAsHeader($filename) {
if(headers_sent())
return false;
header('Content-disposition: attachment; filename=' . $filename);
return true;
}
This one sends a CSV to a stream using a mysql result
function csvFromResult($stream, $result, $showColumnHeaders = true) {
if($showColumnHeaders) {
$columnHeaders = array();
$nfields = mysql_num_fields($result);
for($i = 0; $i < $nfields; $i++) {
$field = mysql_fetch_field($result, $i);
$columnHeaders[] = $field->name;
}
fputcsv($stream, $columnHeaders);
}
$nrows = 0;
while($row = mysql_fetch_row($result)) {
fputcsv($stream, $row);
$nrows++;
}
return $nrows;
}
This one uses the above function to write a CSV to a file, given by $filename
function csvFileFromResult($filename, $result, $showColumnHeaders = true) {
$fp = fopen($filename, 'w');
$rc = csvFromResult($fp, $result, $showColumnHeaders);
fclose($fp);
return $rc;
}
And this is where the magic happens ;)
function csvToExcelDownloadFromResult($result, $showColumnHeaders = true, $asFilename = 'data.csv') {
setExcelContentType();
setDownloadAsHeader($asFilename);
return csvFileFromResult('php://output', $result, $showColumnHeaders);
}
For example:
$result = mysql_query("SELECT foo, bar, shazbot FROM baz WHERE boo = 'foo'");
csvToExcelDownloadFromResult($result);
A: // Export to CSV
if($_GET['action'] == 'export') {
$rsSearchResults = mysql_query($sql, $db) or die(mysql_error());
$out = '';
$fields = mysql_list_fields('database','table',$db);
$columns = mysql_num_fields($fields);
// Put the name of all fields
for ($i = 0; $i < $columns; $i++) {
$l=mysql_field_name($fields, $i);
$out .= '"'.$l.'",';
}
$out .="\n";
// Add all values in the table
while ($l = mysql_fetch_array($rsSearchResults)) {
for ($i = 0; $i < $columns; $i++) {
$out .='"'.$l["$i"].'",';
}
$out .="\n";
}
// Output to browser with appropriate mime type, you choose ;)
header("Content-type: text/x-csv");
//header("Content-type: text/csv");
//header("Content-type: application/csv");
header("Content-Disposition: attachment; filename=search_results.csv");
echo $out;
exit;
}
A: Look at the documentation regarding the SELECT ... INTO OUTFILE syntax.
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
A: An update to @jrgns's solution (with some slight syntax differences).
$result = mysql_query('SELECT * FROM `some_table`');
if (!$result) die('Couldn\'t fetch records');
$num_fields = mysql_num_fields($result);
$headers = array();
for ($i = 0; $i < $num_fields; $i++)
{
$headers[] = mysql_field_name($result , $i);
}
$fp = fopen('php://output', 'w');
if ($fp && $result)
{
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');
header('Pragma: no-cache');
header('Expires: 0');
fputcsv($fp, $headers);
while ($row = mysql_fetch_row($result))
{
fputcsv($fp, array_values($row));
}
die;
}
A: SELECT * INTO OUTFILE "c:/mydata.csv"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY "\n"
FROM my_table;
(the documentation for this is here: http://dev.mysql.com/doc/refman/5.0/en/select.html)
or:
$select = "SELECT * FROM table_name";
$export = mysql_query ( $select ) or die ( "Sql error : " . mysql_error( ) );
$fields = mysql_num_fields ( $export );
$header = '';
$data = '';
for ( $i = 0; $i < $fields; $i++ )
{
    $header .= mysql_field_name( $export , $i ) . "\t";
}
while( $row = mysql_fetch_row( $export ) )
{
$line = '';
foreach( $row as $value )
{
if ( ( !isset( $value ) ) || ( $value == "" ) )
{
$value = "\t";
}
else
{
$value = str_replace( '"' , '""' , $value );
$value = '"' . $value . '"' . "\t";
}
$line .= $value;
}
$data .= trim( $line ) . "\n";
}
$data = str_replace( "\r" , "" , $data );
if ( $data == "" )
{
$data = "\n(0) Records Found!\n";
}
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=your_desired_name.xls");
header("Pragma: no-cache");
header("Expires: 0");
print "$header\n$data";
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "136"
}
|
Q: Rounded corners, is this Mozilla specific? I was looking at how some site implemented rounded corners, and the CSS had these odd tags that I've never really seen before.
-moz-border-radius-topright: 5px;
-webkit-border-top-right-radius: 5px;
-moz-border-radius-bottomright: 5px;
-webkit-border-bottom-right-radius: 5px;
I googled it, and they seem to be Firefox specific tags?
Update
The site I was looking at was Twitter; it's weird how a site like that would alienate its IE users.
A: The -moz-* properties are Gecko-only (Firefox, Mozilla, Camino), the -webkit-* properties are WebKit-only (Chrome, Safari, Epiphany). Vendor-specific prefixes are common for implementing CSS capabilities that have not yet been standardized by the W3C.
Twitter's not "alienating" their IE users. There's simply adding style for browsers that support it.
A: The -moz ones are firefox specific, the -webkit ones are for safari, chrome and a few other browsers that use that rendering engine.
These are early implementations of attributes that are defined in CSS3 so they will be coming in the future without the prefixes.
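For illustration, the prefixed rules from the question next to the unprefixed CSS3 property they anticipate (a sketch, not any site's actual stylesheet):
.rounded {
    -moz-border-radius: 5px;    /* Gecko (Firefox) */
    -webkit-border-radius: 5px; /* WebKit (Safari, Chrome) */
    border-radius: 5px;         /* CSS3, once browsers support it unprefixed */
}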
A: I suggest browsing the site from IE or some other browser. I bet you get different markup.
A: Those are for Firefox (the ones labeled -moz-border) and for Safari (-webkit-border).
A: Yes, anything with a -moz in front of it will only work in Firefox.
The -webkit ones will only work in WebKit-based browsers like Safari and Chrome.
See here for many ways to make rounded corners with just normal css tags.
Edit: I don't think that not having rounded corners is exactly alienating, just a slightly different look for IE.
Complete lists of all -moz and -webkit css styles if anyone wants to know
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: C++ Passing Options To Executable How do you pass options to an executable? Is there an easier way than making the options boolean arguments?
EDIT: The last two answers have suggested using arguments. I know I can code a workable solution like that, but I'd rather have them be options.
EDIT2: Per requests for clarification, I'll use this simple example:
It's fairly easy to handle arguments because they automatically get parsed into an array.
./printfile file.txt 1000
If I want to know what the name of the file the user wants to print, I access it via argv[1].
Now about how this situation:
./printfile file.txt 1000 --nolinebreaks
The user wants to print the file with no line breaks. This is not required for the program to be able to run (as the filename and number of lines to print are), but the user has the option of using it if s/he would like. Now I could do this using:
./printfile file.txt 1000 true
The usage prompt would inform the user that the third argument is used to determine whether to print the file with line breaks or not. However, this seems rather clumsy.
A: You seem to think that there is some fundamental difference between "options" that start with "--" and "arguments" that don't. The only difference is in how you parse them.
It might be worth your time to look at GNU's getopt()/getopt_long() option parser. It supports passing arguments with options such as --number-of-line-breaks 47.
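A minimal sketch of the printfile example from the question using getopt_long (option and file names follow the question; error handling is kept short):
#include <getopt.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int nolinebreaks = 0;
    static struct option long_opts[] = {
        {"nolinebreaks", no_argument, 0, 'n'},
        {0, 0, 0, 0}
    };
    int c;
    while ((c = getopt_long(argc, argv, "n", long_opts, NULL)) != -1)
        if (c == 'n')
            nolinebreaks = 1;

    /* optind is the index of the first non-option argument (the filename) */
    if (optind < argc)
        printf("file: %s, nolinebreaks: %d\n", argv[optind], nolinebreaks);
    return 0;
}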
A: I use two methods for passing information:
1/ The use of command line arguments, which are made easier to handle with specific libraries such as getargs.
2/ As environment variables, using getenv.
A: Command-line arguments is the way to go. You may want to consider using Boost.ProgramOptions to simplify this task.
A: Pax has the right idea here.
If you need more thorough two-way communication, open the process with pipes and send stuff to stdin/listen on stdout.
A: You can also use the Windows PostMessage() function. This is very handy if the executable you want to send the options to is already running. I can post some example code if you are interested in this technique.
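A hedged sketch of that idea (the window title is hypothetical; note that a cross-process string payload has to travel via SendMessage and WM_COPYDATA, since PostMessage cannot marshal the buffer):
#include <windows.h>
#include <wchar.h>

int main()
{
    HWND hwnd = FindWindowW(NULL, L"TargetAppWindowTitle");
    if (hwnd)
    {
        const wchar_t *opt = L"--nolinebreaks";
        COPYDATASTRUCT cds = { 0 };
        cds.cbData = (DWORD)((wcslen(opt) + 1) * sizeof(wchar_t));
        cds.lpData = (PVOID)opt;
        // the receiving app handles WM_COPYDATA in its window procedure
        SendMessage(hwnd, WM_COPYDATA, 0, (LPARAM)&cds);
    }
    return 0;
}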
A: The question isn't blazingly clear as to the context and just what you are trying to do - you mean running an executable from within a C++ program? There are several standard C library functions with names like execl(), execv(), execve(), ... that take the options as strings or pointer to an array of strings. There's also system() which takes a string containing whatever you'd be typing at a bash prompt, options and all.
A: I like the popt library. It is C, but works fine from C++ as well.
It doesn't appear to be cross-platform though. I found that out when I had to hack out my own API-compatible version of it for a Windows port of some Linux software.
A: You can put options in a .ini file and use the GetPrivateProfileXXX APIs to create a class that can read the type of program options you're looking for from the .ini.
You can also create an interactive shell for your app to change certain settings real-time.
EDIT:
From your edits, can't you just parse each option looking for special keywords associated with that option that are "optional"?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: WYSIWYG HTML Editor for Windows Mobile forms app I am developing a forms app (not web) for Windows Mobile, using .NET CF 3.5. I need an HTML editor control. I'm looking for something along the lines of a simple FCKEditor, but for using in a forms app (EXE).
Any suggestions?
A: Pocket IE (the web browser included with Windows Mobile) is about as powerful as Netscape 2... without the Javascript support. So using a browser-based HTML editor isn't going to work with it. Opera has most of the power of the desktop version (including DOM and Javascript support), but I'm not sure it has an embedding facility on Windows Mobile. Also it would mean your app would need a copy of Opera to work, and it is commercial software.
I'd suggest you either: Scale back your plans somewhat and forget about WYSIWYG HTML editing. It's a small device with a small screen - is it really necessary for people to edit web content on it?
Or: You write your own small editor from scratch as a Windows.Forms control. If you only wanted to support font, size and color changing (which is 90%) of what people do in these editor boxes, it wouldn't be too hard, but it's still probably a few weeks work for an experienced .NET forms developer.
A: I'm not sure if it is supported by the Compact Framework, but you can try setting designMode on the WebBrowser control through reflection and wrapping it up to make a simple WYSIWYG editor. I think this is the code:
webBrowser1.Document.GetType().GetProperty("designmode").SetValue(webBrowser1.Document, true, null);
A: About all I have that I can suggest is Notepad++. I use it for Web stuff, but it has support for, I believe, 30+ languages, and it's free, so you may be able to find what you're looking for there.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Long running ruby process that uses ActiveRecord to store records in a database I'm trying to write an app using Ruby on Rails and I'm trying to achieve the following:
The app needs to receive UDP messages coming in on a specific port (possibly 1 or more per second) and store them in the database so that the rest of my Rails app can access it.
I was thinking of writing a separate daemon that would receive these messages and shell out to a ruby script on my rails app that will store the message in the database using the right model. The problem with this approach is that the ruby script will be run very often. It would be better performance-wise if I could just have a long-running ruby process that can constantly receive the UDP messages and store them in the database.
Is this the right way to do it? Is there something in the Rails framework that can help with this?
A: You definitely don't want to load the Rails stack for each incoming request -- that would be way too slow; you'll want to use something lower-level to handle the incoming connections. You might look at the internals of Webrick to see a simple server daemon coded in ruby -- or, if you want something more performant, look at Mongrel or Thin.
In general, the whole Rails stack isn't gonna help you a lot here -- most of it is geared towards serving web apps, not saving stuff straight off the wire.
The part of Rails that will probably help you the most is ActiveRecord -- there's a pretty good chance you'll want to use that to store your Model data in the database. In fact, you should be able to include your actual Rails models and use them in your UDP-monitoring process. Check the ActiveRecord docs for examples in connecting to a database outside your Rails project.
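A rough sketch of that setup (the connection settings, Message model, and port number are all assumptions to adjust for your app):
require 'socket'
require 'active_record'

ActiveRecord::Base.establish_connection(
  :adapter  => 'mysql',
  :database => 'myapp_production',
  :username => 'app',
  :password => 'secret',
  :host     => 'localhost'
)

# hypothetical model mapping to a 'messages' table
class Message < ActiveRecord::Base
end

socket = UDPSocket.new
socket.bind('0.0.0.0', 9999)

loop do
  data, _sender = socket.recvfrom(1024)  # blocks until a datagram arrives
  Message.create(:body => data)          # persist through ActiveRecord
end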
A: I have an application that does something similar to this, i.e receiving lots of messages on a port and persisting them to the database. We addressed a number of issues when evolving the design of our database, including the fact that we must not lose messages even if the database was unavailable for some reason.
For performance reasons and to ensure we did not lose messages we went for a two stage process. We wrote a small handler that listened for messages and then persisted them to a message queue using Apache Active-MQ. We then used the ActiveMessaging plugin within a separate rails application to consume the messages from the queue and persist them to the database. This method makes it easy to scale the listeners and will result in a much higher messaging throughput.
If you were to go this route then you might want to look at the Fuse implementation of Active-MQ, which is generally a few versions further on than the Apache version.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: 1000's of locations have a desktop application that need to upload diff's of their db to a central store There are literally thousands of locations where an application is running (a .net desktop app), and a requirement is to have the updates to their database (the diff from the last upload) sent to a central server that will store all the locations' data.
What options do I have in coming up with a solution?
One idea is to generate an XML file with all the rows since the last sync, and upload that to a file server; it would then be imported into the main database.
Note: the changed data will be minimal since this process will run every few hours.
A: I would probably make an effort to avoid writing this code. It sounds like the kind of problem database replication was designed to solve. It would depend on criteria you don't communicate in your question, such as database engine in use, available transports, whether different locations updates would overlap each other, the design of the database as it relates to keys and unique indexes, etc.
A: Since there are 1000's of locations I suggest that each desktop application sends an xml string to a web server. The web server application could update the database upon receiving the string.
If you are using .NET on both sides you could implement this using the framework's Web Services classes.
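As a rough sketch of the client side of that approach (the endpoint URL is hypothetical; xmlDiff holds the rows changed since the last sync, serialized as XML):
using System.Net;

class SyncUploader
{
    public static void Upload(string xmlDiff)
    {
        using (WebClient client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
            client.UploadString("https://central.example.com/sync", xmlDiff);
        }
    }
}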
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Relative path reference in WebConfig.ConnectionString Is it possible to specify a relative path reference in the connection string's AttachDBFilename property in a web.config?
For example, if my database is located in the App_Data folder, I can easily specify the AttachDBFilename as |DataDirectory|\mydb.mdf and the |DataDirectory| will automatically resolve to the correct path.
Now, suppose that the web.config file is located in folder A, but the database is located in the B\App_Data folder, where folders A and B are located in the same parent folder. Is there any way to use a relative path reference to resolve to the correct path?
A: I had the same problem with the following scenario: I wanted to use the same database as the application from my integration tests.
I went with the following workaround:
In the App.config of my test-project I have:
<appSettings>
<add key="DataDirectory" value="..\..\..\BookShop\App_Data\"/>
</appSettings>
In the test-setup I execute the following code:
var dataDirectory = ConfigurationManager.AppSettings["DataDirectory"];
var absoluteDataDirectory = Path.GetFullPath(dataDirectory);
AppDomain.CurrentDomain.SetData("DataDirectory", absoluteDataDirectory);
A: It depends on where your '|DataDirectory|' is located. If the resolved value of '|DataDirectory|' is in folder A (where the web.config is), then no - you can't specify a relative path that is not a subfolder of the resolved value of '|DataDirectory|'.
What you can do is set the value of '|DataDirectory|' to be wherever you would like, by calling the AppDomain.SetData method.
From the MSDN online documentation:
When DataDirectory is used, the resulting file path cannot be higher in the directory structure than the directory pointed to by the substitution string. For example, if the fully expanded DataDirectory is C:\AppDirectory\app_data, then the sample connection string shown above works because it is below c:\AppDirectory. However, attempting to specify DataDirectory as |DataDirectory|..\data will result in an error because \data is not a subdirectory of \AppDirectory.
Hope this helps.
A: Add the following attributes to the test method:
[DeploymentItem("..\\TestSolutionDir\\TestProjedtDir\\TestDataFolder\\TestAutomationSpreadsheet.xlsx")]
[DataSource("System.Data.Odbc", "Dsn=Excel Files;dbq=|DataDirectory|\\TestAutomationSpreadsheet.xlsx", "SpreadsheetTabName$", DataAccessMethod.Sequential)]
The |DataDirectory| variable is defined by the system when it runs the test. The DeploymentItem copies the spreadsheet there. You point to the spreadsheet and to the tab within the spreadsheet that the data is coming from. Right-click on the tab to rename it to something easy to remember.
A: In IIS you could also create a virtual directory that points at wherever the real database is kept. Then your connection string just references the virtual directory.
A: Web.config
<appSettings>
<add key="FilePath" value="App_Data\SavedFiles\"/>
</appSettings>
Path.cs
string filePath = AppDomain.CurrentDomain.BaseDirectory + (ConfigurationManager.AppSettings["FilePath"]);
Works for me..!!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Passing a regex substitution as a variable in Perl I need to pass a regex substitution as a variable:
sub proc {
my $pattern = shift;
my $txt = "foo baz";
$txt =~ $pattern;
}
my $pattern = 's/foo/bar/';
proc($pattern);
This, of course, doesn't work. I tried eval'ing the substitution:
eval("$txt =~ $pattern;");
but that didn't work either. What horribly obvious thing am I missing here?
A: Well, you can precompile the regular expression using the qr// operator. But you can't pass an operator (s///).
$pattern = qr/foo/;
print "match!\n" if $text =~ $pattern;
But if you have to pass the substitution operator, you are down to passing either code or strings:
proc('$text =~ s/foo/bar/');
sub proc {
my $code = shift;
...
eval $code;
}
or, code:
proc(sub {my $text = shift; $text =~ s/foo/bar/});
sub proc {
my $code = shift;
...
$code->("some text");
}
A: sub proc {
my($match, $subst) = @_;
my $txt = "foo baz";
$txt =~ s/$match/$subst/;
print "$txt\n";
}
my $matcher = qr/foo/;
my $sub_str = "bar";
proc($matcher, $sub_str);
This rather directly answers your question. You can do more, but note that when I used a qr// term for $sub_str instead of a simple literal, the stringified form of the compiled regex was substituted into the text.
I recently needed to create a parser (test parser) for statements with some peculiar (dialect of) SQL types, recognizing lines such as this, splitting it into three type names:
input: datetime year to second,decimal(16,6), integer
The script I used to demo this used quoted regexes.
#!/bin/perl -w
use strict;
while (<>)
{
chomp;
print "Read: <$_>\n";
my($r1) = qr%^input\s*:\s*%i;
if ($_ =~ $r1)
{
print "Found input:\n";
s%$r1%%;
print "Residue: <$_>\n";
my($r3) = qr%(?:year|month|day|hour|minute|second|fraction(?:\([1-5]\))?)%;
my($r2) = qr%
(?:\s*,?\s*)? # Commas and spaces
(
(?:money|numeric|decimal)(?:\(\d+(?:,\d+)?\))? |
int(?:eger)? |
smallint |
datetime\s+$r3\s+to\s+$r3
)
%ix;
while ($_ =~ m/$r2/)
{
print "Got type: <$1>\n";
s/$r2//;
}
print "Residue 2: <$_>\n";
}
else
{
print "No match:\n";
}
print "Next?\n";
}
We can argue about the use of names like $r1, etc. But it did the job...it was not, and is not, production code.
A: s/// is not a regex. Thus, you can't pass it as a regex.
I don't like eval for this. It's very fragile, with a lot of border cases.
I think it's best to take an approach similar to the one JavaScript takes: pass both a regex (in Perl, that is qr//) and a code reference for the substitution. For example, to pass parameters to get the same effect as
s/(\w+)/\u\L$1/g;
You can call
replace($string, qr/(\w+)/, sub { "\u\L$1" }, 'g');
Note that the 'g' modifier is not actually a flag for the regex (I think attaching it to the regex is a design mistake in JavaScript), so I chose to pass it in a third parameter.
Once the API has been decided on, the implementation can be done next:
sub replace {
my($string, $find, $replace, $global) = @_;
unless($global) {
$string =~ s($find){ $replace->() }e;
} else {
$string =~ s($find){ $replace->() }ge;
}
return $string;
}
Let's try it:
print replace('content-TYPE', qr/(\w+)/, sub { "\u\L$1" }, 'g');
Result:
Content-Type
That looks good to me.
A: eval "$txt =~ $pattern";
This becomes
eval "\"foo baz\" =~ s/foo/bar/"
and substitutions don't work on literal strings.
This would work:
eval "\$txt =~ $pattern"
but that's not very pleasing. eval is almost never the right solution.
zigdon's solution can do anything, and Jonathan's solution is quite suitable if the replacement string is static. If you want something more structured than the first and more flexible than the second, I'd suggest a hybrid:
sub proc {
my $pattern = shift;
my $code = shift;
my $txt = "foo baz";
$txt =~ s/$pattern/$code->()/e;
print "$txt\n";
}
my $pattern = qr/foo/;
proc($pattern, sub { "bar" }); # ==> bar baz
proc($pattern, sub { "\U$&" }); # ==> FOO baz
A: Perhaps you might rethink your approach.
You want to pass in to a function a regex substitution, probably because the function will be deriving the text to be operated upon from some other source (reading from a file, socket, etc.). But you're conflating regular expression with regular expression substitution.
In the expression, s/foo/bar/, you actually have a regular expression ("/foo/") and a substitution ("bar") that should replace what is matched by the expression. In the approaches you've tried thus far, you ran into problems trying to use eval, mainly because of the likelihood of special characters in the expression that either interfere with eval or get interpolated (i.e., gobbled up) in the process of evaluation.
So instead, try passing your routine two arguments: the expression and the substitution:
sub apply_regex {
my $regex = shift;
my $subst = shift || ''; # No subst string will mean matches are "deleted"
# Some setup and processing happens...
# Time to make use of the regex that was passed in:
while (defined($_ = <$some_filehandle>)) {
s/$regex/$subst/g; # You can decide if you want to use /g etc.
}
# The rest of the processing...
}
This approach has an added benefit: if your regex pattern doesn't have any special characters in it, you can just pass it in directly:
apply_regex('foo', 'bar');
Or, if it does, you can use the qr// quoting-operator to create a regex object and pass that as the first parameter:
apply_regex(qr{(foo|bar)}, 'baz');
apply_regex(qr/[ab]+/, '(one or more of "a" or "b")');
apply_regex(qr|\d+|); # Delete any sequences of digits
Most of all, you really don't need eval or the use of code-references/closures for this task. That will only add complexity that may make debugging harder than it needs to be.
A:
I need to pass a regex substitution as a variable
Do you? Why not pass a code reference? Example:
sub modify
{
my($text, $code) = @_;
$code->($text);
return $text;
}
my $new_text = modify('foo baz', sub { $_[0] =~ s/foo/bar/ });
In general, when you want to pass "something that does something" to a subroutine ("a regex substitution" in the case of your question) the answer is to pass a reference to a piece of code. Higher Order Perl is a good book on the topic.
A: I have an extremely simple script for mass file renaming that employs this trick:
#!/opt/local/bin/perl
sub oops { die "Usage : sednames s/old/new [files ..]\n"; }
oops if ($#ARGV < 0);
$regex = eval 'sub { $_ = $_[0]; ' . shift(@ARGV) . '; return $_; }';
sub regex_rename { foreach (<$_[0]>) {
rename("$_", &$regex($_));
} }
if ($#ARGV < 0) { regex_rename("*"); }
else { regex_rename(@ARGV); }
Any Perl command that modifies $_ like s/old/new could be employed to modify the files.
I decided upon using eval so that the regular expression only needed to be compiled once. There is some wonkiness with eval and $_ that prevented me from using simply:
eval 'sub { ' . shift(@ARGV) . ' }';
Note that &$regex does modify $_ as a side effect, which is why rename's first argument is written as "$_": it copies $_ into a fresh string before &$regex changes it. Yes, eval is quite fragile, like everyone else said.
A: I found a probably better way to do it:
sub proc {
my ($pattern, $replacement) = @_;
my $txt = "foo baz";
$txt =~ s/$pattern/$replacement/g; # This substitution is global.
}
my $pattern = qr/foo/; # qr means the regex is pre-compiled.
my $replacement = 'bar';
proc($pattern, $replacement);
If the flags of the substitution have to be variable, you can use this:
sub proc {
my ($pattern, $replacement, $flags) = @_;
my $txt = "foo baz";
eval('$txt =~ s/$pattern/$replacement/' . $flags);
}
proc(qr/foo/, 'bar', 'g');
Please note that you don't need to escape / in the replacement string.
A: You're right - you were very close:
eval('$txt =~ ' . "$pattern;");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: What's a good tool to screen-scrape with Javascript support? Is there a good test suite or tool set that can automate website navigation -- with Javascript support -- and collect the HTML from the pages?
Of course I can scrape straight HTML with BeautifulSoup. But this does me no good for sites that require Javascript. :)
A: Using HtmlUnit is also a possibility.
HtmlUnit is a "GUI-Less browser for Java programs". It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc... just like you do in your "normal" browser.
It has fairly good JavaScript support (which is constantly improving) and is able to work even with quite complex AJAX libraries, simulating either Firefox or Internet Explorer depending on the configuration you want to use. It is typically used for testing purposes or to retrieve information from web sites.
A: Selenium now wraps HtmlUnit, so you don't need to start a browser anymore. The new WebDriver API is very easy to use too. The first example uses the HtmlUnit driver.
A: You could use Selenium or Watir to drive a real browser.
There are also some JavaScript-based headless browsers:
*
*PhantomJS is a headless Webkit browser.
*
*pjscrape is a scraping framework based on PhantomJS and jQuery.
*CasperJS is a navigation scripting & testing utility based on PhantomJS, if you need to do a little more than point at URLs to be scraped.
*Zombie for Node.js
Personally, I'm most familiar with Selenium, which has support for writing automation scripts in a good number of languages and has more mature tooling, such as the excellent Selenium IDE extension for Firefox, which can be used to write and run test cases, and can export test scripts to many languages.
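For instance, grabbing the fully rendered HTML with Selenium's Python bindings looks roughly like this (the URL is a placeholder):

from selenium import webdriver

# Drive a real Firefox instance so the page's JavaScript actually runs,
# then collect the resulting DOM serialized as HTML.
driver = webdriver.Firefox()
driver.get('http://example.com/js-heavy-page')
html = driver.page_source
driver.quit()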
A: It would be very difficult to code a solution that would work with any arbitrary site out there. Each navigation menu implementation can be quite unique. I've worked a great deal with scrapers, and, provided you know the site you wish to target, here is how I'd approach it.
Usually, if you analyze the particular javascript used in a nav menu, it is fairly easy to use regular expressions to pull out the entire set of variables that are used to build the navmenu. I have never used Beautiful Soup, but from your description it sounds like it may only work on HTML elements and not be able to work inside the script tags.
If you're still having problems, or need to emulate some form POSTs or AJAX, get Firefox and install the LiveHttpHeaders plugin. This plugin will allow you to manually browse the site and capture the URLs being navigated along with any cookies that are being passed during your manual browsing. That is what your scraper bot needs to send in its requests to get a valid response from the target webserver(s). This will also capture any AJAX calls being made, and in many cases the same AJAX calls must be implemented in your scraper to get the desired responses.
A: Mozenda is a great tool to use as well.
A: Keep in mind that any JavaScript fanciness only changes the browser's internal DOM model of the page, and does nothing to the raw HTML.
A: I've been using Selenium for this and I find that it works great.
Selenium runs in the browser and will work with Firefox, WebKit and IE.
http://selenium.openqa.org/
A: @insin Watir is not IE only.
https://stackoverflow.com/questions/81566#83387
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Does Postscript have a concept of a table? What I'm trying to achieve is to determine if the Postscript that I'm parsing contains any element that resides in a table (box).
I'm asking whether it has a built-in way to lay out tabular data on the page. My guess is that PostScript doesn't have a concept of a table, because I couldn't find one anywhere in the spec.
The problem I need to solve is finding a way to know if certain PostScript output lies inside a table.
A: Sounds like you are trying to draw something and test if any part of it draws within some specified box. You can create a path for the thing to be tested (just don't stroke or fill it), and create another path for the box (e.g. a table cell). Leave these two paths on the stack, and use one of the operators inufill, inustroke, etc.
If you happen to have the Postscript Language Reference 3rd edition, the goodies are listed under "Insideness-Testing Operators" on p. 520, with details in the alphabetical section following that.
A: Short answer is no. It's a low-level language for describing where to put ink on a page, with no concept of organizing it beyond lines, arcs and Bézier curves connecting x,y points put on the stack.
That said, I have written PostScript by hand, and it would be smart to create variables, or arrays of x and y values, to use for aligning points. The arrays would be especially useful inside a for loop which renders the contents and draws border lines. Beware of fencepost bugs!
A: No, you will have to code the table yourself.
I did this once many years ago. After being fed up with TeX, I wrote an interpreter in PostScript that did similar things. Never found much use for it though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Any quirks I should be aware of in Drupal's XML-RPC and BlogAPI implementations? I'm beginning work on a project that will access a Drupal site to create (and eventually edit) nodes on the site, via the XML-RPC facility and BlogAPI module shipped with Drupal. This includes file uploading, as the project is to allow people to upload pictures en mass to a Drupal site with minimal ado.
What I'd like to know is if there are any caveats I should look out for. Has anyone had experience targeting Drupal's XML-RPC implementation, or the implementation of any of the blogging APIs supported by its BlogAPI module? What advice would you give to others taking the same path?
A: While the XML-RPC facility is pretty stable and works well, the BlogAPI module has various issues, especially with discovery, that make using it for anything but regular blogs painful. Currently, there is no use of blogIds in the generated Really Simple Discovery document (of which only one exists for a site) or for the blogging APIs implemented in BlogAPI.
Which blog receives a post is determined by user credentials, which works fine as long as only one node type is available for access through BlogAPI, but when you try and have two or more node types available through the module, things tend to fall apart.
Looking at the state of BlogAPI in Drupal's HEAD on CVS, we might not see a solution to this until 8.x at the earliest. However, there are several people working on redeveloping BlogAPI as a third party module, perhaps to merge back in to Drupal core at some later date. If you want to use a well-known blogging API, it might be best to get involved with their effort. If it's something else, though, the XML-RPC facility provided through hook_xmlrpc() does a great job of letting you provide your own XML-RPC interfaces.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Reinstalling/ un-revoking digital certificate So I have a problem in Vista where I can't run certain applications (well, the entire MS suite: Office, Visual Studio, etc.) as an Administrator.
From the looks of the problem it is caused by the Digital Certificate, and the fact that it has been "revoked".
So now I have a problem: how do I un-revoke the digital certificate? Alternatively, I've got a copy of the digital certificate from a co-worker whose one is fine. I've tried right-click -> install certificate, but it doesn't seem to change anything, nor can I work out how to select it when I try to install a certificate for Visual Studio/Office.
A: I'd be very very surprised if Microsoft's code-signing certificate had been revoked.
I would first check your system's date and time, to ensure that Windows doesn't think the certificate isn't yet valid or has expired. Then I'd check that you still have the Microsoft Root Certificate Authority root certificate.
UPDATE 2013-05-23: Microsoft have in fact shipped some components whose digital signatures were not properly timestamped, meaning that they won't install properly after the certificate has expired. See Microsoft Security Advisory 2749655 - Compatibility Issues Affecting Signed Microsoft Binaries for more information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: STL like container typedef shortcut? A common pattern with STL containers is this:
map<Key, Value> map;
for(map<Key, Value>::iterator iter = map.begin(); iter != map.end(); ++iter)
{
...
}
So in order to avoid writing the declaration of the template parameters we can do this somewhere:
typedef map<Key, Value> TNiceNameForAMap;
But if this map is only used in a single function or for a single iteration this is an annoying overhead.
Is there any way around this typedef?
A: You can use Boost.Foreach
A: A future version of the C++ standard (known as C++0x) will introduce a new use for the auto keyword, allowing you to write something like the following:
map<Key, Value> map;
for(auto iter = map.begin(); iter != map.end(); ++iter)
{
...
}
A: Personally I think MYMAP::iterator is more readable than map<int,string>::iterator or even std::map<int, std::string>::iterator so I always do the typedef. The only overhead is one line of code.
Once the code is compiled, there will be no difference in the size or speed of the executable. It's just about readability.
A: Not sure what you mean by "overhead". If it simplifies the way you write your code, use it, otherwise stick with the longhand.
If it's only used in a restricted scope, put the typedef in that same scope. Then it doesn't need to be published, or documented, or appear on any UML diagrams. For example (and I don't claim this is the best code ever in other respects):
int totalSize() {
typedef std::map<Key, Value> DeDuplicator;
DeDuplicator everything;
// Run around the universe finding everything. If we encounter a key
// more than once it's only added once.
// now compute the total
int total = 0;
for(DeDuplicator::iterator i = everything.begin(); i != everything.end(); ++i) {
total += i->second.size(); // yeah, yeah, overflow. Whatever.
}
return total;
}
Combining with Ferruccio's suggestion (if you're using boost), the loop becomes:
BOOST_FOREACH(DeDuplicator::value_type p, everything) {
total += p.second.size();
}
And combining with bk1e's suggestion (if you're using C++0x or have features from it), and assuming that BOOST_FOREACH interacts with auto in the way I think it should based on the fact that it can normally handle implicit casts to compatible types:
std::map<Key, Value> everything;
// snipped code to run around...
int total = 0;
BOOST_FOREACH(auto p, everything) {
total += p.second.size();
}
Not bad.
A: If the typedef is local to a single function it doesn't even need to be a nice name. Use X or MAP, just like in a template.
A: C++0x will also offer a range-based for loop, which is similar to the foreach loops of other languages.
Unfortunately, GCC does not yet implement range-based for (but does implement auto).
Edit: In the meanwhile, also consider typedefing the iterator. It doesn't get around the one-use typedef (unless you put that in a header, which is always an option), but it makes the resulting code shorter by one ::iterator.
A: Over the past few years I've really tried to move away from using manually written loops in preference to using the STL algorithms. Your above code can be changed to:
struct DoLoopBody {
template <typename ValueType>
inline void operator()(ValueType v) const {
// ...
}
};
std::for_each (map.begin(), map.end(), DoLoopBody ());
Unfortunately the class DoLoopBody cannot be a local class, which is often highlighted as a disadvantage. However, I see this as an advantage in that the body of the loop can now be unit tested in isolation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: extracting text from MS word files in python For working with MS Word files in Python, there are the Python Win32 extensions, which can be used on Windows. How do I do the same in Linux?
Is there any library?
A: I know this is an old question, but I was recently trying to find a way to extract text from MS word files, and the best solution by far I found was with wvLib:
http://wvware.sourceforge.net/
After installing the library, using it in Python is pretty easy:
import commands
exe = 'wvText ' + word_file + ' ' + output_txt_file
out = commands.getoutput(exe)
exe = 'cat ' + output_txt_file
out = commands.getoutput(exe)
And that's it. Pretty much, what we're doing is using the commands.getoutput function to run a couple of shell commands, namely wvText (which extracts the text from a Word document) and cat (to read the file output). After that, the entire text from the Word document will be in the out variable, ready to use.
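On Python 3, where the commands module no longer exists, the same two steps reduce to a subprocess call and a file read; a sketch with made-up file names:

import subprocess

word_file = 'input_file.doc'        # made-up input name
output_txt_file = 'output_file.txt'

# wvText takes the input document and the output text file as arguments.
subprocess.check_call(['wvText', word_file, output_txt_file])
with open(output_txt_file) as f:
    out = f.read()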
Hopefully this will help anyone having similar issues in the future.
A: Take a look at how the doc format works and at how to create a Word document using PHP on Linux. The former is especially useful. Abiword is my recommended tool. There are limitations though:
However, if the document has complicated tables, text boxes, embedded spreadsheets, and so forth, then it might not work as expected. Developing good MS Word filters is a very difficult process, so please bear with us as we work on getting Word documents to open correctly. If you have a Word document which fails to load, please open a Bug and include the document so we can improve the importer.
A: (Note: I posted this on this question as well, but it seems relevant here, so please excuse the repost.)
Now, this is pretty ugly and pretty hacky, but it seems to work for me for basic text extraction. Obviously to use this in a Qt program you'd have to spawn a process for it etc, but the command line I've hacked together is:
unzip -p file.docx | grep '<w:t' | sed 's/<[^<]*>//g' | grep -v '^[[:space:]]*$'
So that's:
unzip -p file.docx: -p == "unzip to stdout"
grep '<w:t': Grab just the lines containing '<w:t' (<w:t> is the Word 2007 XML element for "text", as far as I can tell)
sed 's/<[^<]*>//g': Remove everything inside tags
grep -v '^[[:space:]]*$': Remove blank lines
There is likely a more efficient way to do this, but it seems to work for me on the few docs I've tested it with.
As far as I'm aware, unzip, grep and sed all have ports for Windows and any of the Unixes, so it should be reasonably cross-platform. Despite being a bit of an ugly hack ;)
A: If your intention is to use purely Python modules without calling a subprocess, you can use the zipfile Python module.
content = ""
# Load DocX into zipfile
docx = zipfile.ZipFile('/home/whateverdocument.docx')
# Unpack zipfile
unpacked = docx.infolist()
# Find the /word/document.xml file in the package and assign it to variable
for item in unpacked:
if item.orig_filename == 'word/document.xml':
content = docx.read(item.orig_filename)
else:
pass
Your content string, however, needs to be cleaned up; one way of doing this is:
# Clean the content string from xml tags for better search
fullyclean = []
halfclean = content.split('<')
for item in halfclean:
    if '>' in item:
        bad_good = item.split('>')
        if bad_good[-1] != '':
            fullyclean.append(bad_good[-1])
        else:
            pass
    else:
        pass
# Assemble a new string with all pure content
content = " ".join(fullyclean)
But there is surely a more elegant way to clean up the string, probably using the re module.
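For instance, this re-based version is a sketch that is equivalent in spirit to the loop above:

import re

# Replace every XML tag with a space, then collapse runs of whitespace.
content = " ".join(re.sub(r'<[^>]+>', ' ', content).split())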
Hope this helps.
A: Unoconv might also be a good alternative: http://linux.die.net/man/1/unoconv
A: To read Word 2007 and later files, including .docx files, you can use the python-docx package:
from docx import Document
document = Document('existing-document-file.docx')
document.save('new-file-name.docx')
To read .doc files from Word 2003 and earlier, make a subprocess call to antiword. You need to install antiword first:
sudo apt-get install antiword
Then just call it from your python script:
import os
input_word_file = "input_file.doc"
output_text_file = "output_file.txt"
os.system('antiword %s > %s' % (input_word_file, output_text_file))
A: Use the native Python docx module. Here's how to extract all the text from a doc:
document = docx.Document(filename)
docText = '\n\n'.join(
paragraph.text for paragraph in document.paragraphs
)
print(docText)
See Python DocX site
Also check out Textract which pulls out tables etc.
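A minimal textract call would look something like this (it returns bytes, so decode before use; the file name is just an example):

import textract

text = textract.process('existing-document-file.docx').decode('utf-8')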
Parsing XML with regexes invokes Cthulhu. Don't do it!
A: I'm not sure if you're going to have much luck without using COM. The .doc format is ridiculously complex, and is often called a "memory dump" of Word at the time of saving!
@Swati: that's HTML, which is fine and dandy, but most Word documents aren't so nice!
A: If you have LibreOffice installed, you can simply call it from the command line to convert the file to text, then load the text into Python.
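A sketch of that round trip, assuming the libreoffice binary is on your PATH (on some installs it is called soffice) and using a made-up file name:

import os
import subprocess

doc = 'mydocument.doc'
# Headless conversion writes mydocument.txt into the current directory.
subprocess.check_call(['libreoffice', '--headless', '--convert-to', 'txt', doc])
with open(os.path.splitext(doc)[0] + '.txt') as f:
    text = f.read()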
A: Is this an old question?
I believe that such thing does not exist.
There are only answered and unanswered ones.
This one is pretty unanswered, or half answered if you wish.
Well, methods for reading *.docx (MS Word 2007 and later) documents without using COM interop are all covered.
But methods for extracting text from *.doc (MS Word 97-2000), using Python only, are lacking.
Is this complicated?
To do: not really, to understand: well, that's another thing.
When I didn't find any finished code, I read some format specifications and dug out some proposed algorithms in other languages.
MS Word (*.doc) file is an OLE2 compound file.
Not to bother you with a lot of unnecessary details, think of it as a file-system stored in a file. It actually uses FAT structure, so the definition holds. (Hm, maybe you can loop-mount it in Linux???)
In this way, you can store more files within a file, like pictures etc.
The same is done in *.docx by using ZIP archive instead.
There are packages available on PyPI that can read OLE files. Like (olefile, compoundfiles, ...)
I used compoundfiles package to open *.doc file.
However, in MS Word 97-2000, internal subfiles are not XML or HTML, but binary files.
And as if this were not enough, each contains information about the other one, so you have to read at least two of them and unravel the stored info accordingly.
To understand fully, read the PDF document from which I took the algorithm.
The code below was very hastily composed and tested on a small number of files.
As far as I can see, it works as intended.
Sometimes some gibberish appears at the start, and almost always at the end of text.
And there can be some odd characters in-between as well.
Those of you who just wish to search for text will be happy.
Still, I urge anyone who can help to improve this code to do so.
doc2text module:
"""
This is Python implementation of C# algorithm proposed in:
http://b2xtranslator.sourceforge.net/howtos/How_to_retrieve_text_from_a_binary_doc_file.pdf
Python implementation author is Dalen Bernaca.
Code needs refining and probably bug fixing!
As I am not a C# expert I would like some code rechecks by one.
Parts of which I am uncertain are:
* Did the author of the original algorithm use uint32 and int32 correctly when unpacking?
I copied each occurrence as in the original algo.
* Is the FIB length for MS Word 97 1472 bytes as in MS Word 2000, and would it make any difference if it is not?
* Did I interpret each C# command correctly?
I think I did!
"""
from compoundfiles import CompoundFileReader, CompoundFileError
from struct import unpack
__all__ = ["doc2text"]
def doc2text (path):
    text = u""
    cr = CompoundFileReader(path)
    # Load WordDocument stream:
    try:
        f = cr.open("WordDocument")
        doc = f.read()
        f.close()
    except: cr.close(); raise CompoundFileError, "The file is corrupted or it is not a Word document at all."
    # Extract the file information block and piece table stream information from it:
    fib = doc[:1472]
    fcClx = unpack("L", fib[0x01a2l:0x01a6l])[0]
    lcbClx = unpack("L", fib[0x01a6l:0x01a6+4l])[0]
    tableFlag = unpack("L", fib[0x000al:0x000al+4l])[0] & 0x0200l == 0x0200l
    tableName = ("0Table", "1Table")[tableFlag]
    # Load piece table stream:
    try:
        f = cr.open(tableName)
        table = f.read()
        f.close()
    except: cr.close(); raise CompoundFileError, "The file is corrupt. '%s' piece table stream is missing." % tableName
    cr.close()
    # Find piece table inside a table stream:
    clx = table[fcClx:fcClx+lcbClx]
    pos = 0
    pieceTable = ""
    lcbPieceTable = 0
    while True:
        if clx[pos]=="\x02":
            # This is the piece table, we store it:
            lcbPieceTable = unpack("l", clx[pos+1:pos+5])[0]
            pieceTable = clx[pos+5:pos+5+lcbPieceTable]
            break
        elif clx[pos]=="\x01":
            # This is the beginning of some other substructure, we skip it:
            pos = pos+1+1+ord(clx[pos+1])
        else: break
    if not pieceTable: raise CompoundFileError, "The file is corrupt. Cannot locate a piece table."
    # Read info from pieceTable about each piece and extract it from the WordDocument stream:
    pieceCount = (lcbPieceTable-4)/12
    for x in xrange(pieceCount):
        cpStart = unpack("l", pieceTable[x*4:x*4+4])[0]
        cpEnd = unpack("l", pieceTable[(x+1)*4:(x+1)*4+4])[0]
        ofsetDescriptor = ((pieceCount+1)*4)+(x*8)
        pieceDescriptor = pieceTable[ofsetDescriptor:ofsetDescriptor+8]
        fcValue = unpack("L", pieceDescriptor[2:6])[0]
        isANSII = (fcValue & 0x40000000) == 0x40000000
        fc = fcValue & 0xbfffffff
        cb = cpEnd-cpStart
        enc = ("utf-16", "cp1252")[isANSII]
        cb = (cb*2, cb)[isANSII]
        text += doc[fc:fc+cb].decode(enc, "ignore")
    return "\n".join(text.splitlines())
A: You could make a subprocess call to antiword. Antiword is a Linux command-line utility for dumping text out of a Word doc. It works pretty well for simple documents (obviously it loses formatting). It's available through apt, and probably as an RPM, or you could compile it yourself.
A: benjamin's answer is a pretty good one. I have just consolidated...
import zipfile, re
docx = zipfile.ZipFile('/path/to/file/mydocument.docx')
content = docx.read('word/document.xml').decode('utf-8')
cleaned = re.sub('<(.|\n)*?>','',content)
print(cleaned)
A: Just an option for reading 'doc' files without using COM: miette. Should work on any platform.
A: OpenOffice.org can be scripted with Python: see here.
Since OOo can load most MS Word files flawlessly, I'd say that's your best bet.
A: Aspose.Words Cloud SDK for Python is a platform-independent solution to convert MS Word/Open Office files to text. It is a commercial product, but the free trial plan provides 150 monthly API calls.
P.S: I am a developer evangelist at Aspose.
# For complete examples and data files, please go to https://github.com/aspose-words-cloud/aspose-words-cloud-python
# Import module
import asposewordscloud
import asposewordscloud.models.requests
from shutil import copyfile
# Please get your Client ID and Secret from https://dashboard.aspose.cloud.
client_id='xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxx'
client_secret='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
words_api = asposewordscloud.WordsApi(client_id,client_secret)
words_api.api_client.configuration.host='https://api.aspose.cloud'
filename = 'C:/Temp/02_pages.docx'
dest_name = 'C:/Temp/02_pages.txt'
# Convert DOCX to text
request = asposewordscloud.models.requests.ConvertDocumentRequest(document=open(filename, 'rb'), format='txt')
result = words_api.convert_document(request)
copyfile(result, dest_name)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: MySQL search and replace some text in a field What MySQL query will do a text search and replace in one particular field in a table?
I.e. search for foo and replace with bar so a record with a field with the value hello foo becomes hello bar.
A: And if you want to search and replace based on the value of another field you could do a CONCAT:
update table_name set `field_name` = replace(`field_name`,'YOUR_OLD_STRING',CONCAT('NEW_STRING',`OTHER_FIELD_VALUE`,'AFTER_IF_NEEDED'));
Just to have this one here so that others will find it at once.
A: Change table_name and field to match your table name and field in question:
UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') WHERE INSTR(field, 'foo') > 0;
*
*REPLACE (string functions)
*INSTR (string functions)
A: In my experience, the fastest method is
UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') WHERE field LIKE '%foo%';
The INSTR() way is the second-fastest and omitting the WHERE clause altogether is slowest, even if the column is not indexed.
A: UPDATE table SET field = replace(field, text_needs_to_be_replaced, text_required);
Like for example, if I want to replace all occurrences of John by Mark I will use below,
UPDATE student SET student_name = replace(student_name, 'John', 'Mark');
A: UPDATE table_name
SET field = replace(field, 'string-to-find', 'string-that-will-replace-it');
A: The Replace string function will do that.
A: I used the above command line as follow:
update TABLE-NAME set FIELD = replace(FIELD, 'And', 'and');
the purpose was to replace "And" with "and" (the "A" should be lowercase). The problem is it cannot find the "And" in the database, but if I use LIKE "%And%" then it can find it, along with many other "and"s that are part of a word or even ones that are already lowercase.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "297"
}
|
Q: Is there an equivalent to Ctrl-Shift-R in Safari/WebKit? Something that would really reload the page or resource, ignoring whatever might be in cache.
A: Safari always reloads (Ctrl+R) a page, ignoring whatever might be in the cache.
As Athena points out, iframes are cached. It's actually not the iframe content, but the request that's cached.
In those cases, Safari caches the page, and then no matter which link you click, shows the iframe from the last click BEFORE the refresh (or back/forward). It's then stuck on that content, and shows it for all links.
This is overcome by assigning a different iframe id on each load:
iframe.id = new Date().getTime();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is there a way to validate hAtom microformat? I have implemented hAtom microformat on my blog. At least, I think I have, but I can't find any validator (or any software that uses hAtom) in order to determine if I have done this correctly. A Google search for "hatom validator" currently doesn't return anything useful. Does anyone know of a way to confirm that it is implemented correctly?
A: How about Optimus?
Otherwise, you can slow-validate it by adding it to the list of hAtom examples in the wild. Occasionally someone will go through them and move the bad ones to the list of "Examples with some problems."
A: You might find these useful
http://microformats.org/wiki/hatom#Examples
http://microformats.org/wiki/hatom#Implementations
A: Convert it to Atom, validate the Atom, and manually check if it contains all the data you expected.
There's an open-source hCard validator. Maybe someone could adapt it to validate hAtom as well…
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How does Perl 6 evaluate truthiness? In reading about Perl 6, I see a feature being trumpeted about, where you no longer have to do:
return "0 but true";
...but can instead do:
return 0 but True;
If that's the case, how does truth work in Perl 6? In Perl 5, it was pretty simple: 0, "", and undef are false, everything else is true.
What are the rules in Perl 6 when it comes to boolean context?
A: According to O'Reilly's Perl 6 and Parrot Essentials, false is 0, undef, the empty string, and values flagged as false; true is everything else.
Also, Perl 6 has both a primitive boolean type and True and False roles that any value can mix in (so you can have a "0 but True" value or a "1 but False" one, for example, or a false list containing elements, or a true list that's empty).
See http://www.mail-archive.com/macosx@perl.org/msg09930.html
A: So to combine what I think to be the best of everyone's answers:
When you evaluate a variable in boolean context, its .true() method gets called. The default .true() method used by an object does a Perl 5-style check of the object's value (0, "", and undef are false), but when you say "but True" or "but False", this method is overridden with one that doesn't look at the value and just returns a constant.
One could conceivably write a true() method which, say, returned true when the value was even and false when it was odd.
A: Perl 6 evaluates truth now by asking the object a question instead of looking at its value. The value is not the object. It's something I've liked about other object languages and will be glad to have in Perl: I get to decide how the object responds and can mutate that. As ysth said, you could do that in Perl 5 with overload, but I always feel like I have to wash my hands after doing it that way. :)
If you don't do anything to change that, Perl 6 behaves in the same way as Perl 5 so you get the least amount of surprise.
A: See Synopsis 12: Roles.
The rules are the same, but the "but" copies the 0 and applies a role to the copy that causes it to be true in boolean context.
You can do the same thing with overload in Perl 5.
A: The truthiness test just calls the .true method on an object, so the "mix in" operation $stuff but True just (among other things) overrides that method.
This is specified in S02; enum types in general (of which Bool is one) are described in S12.
A: One false value that gets neglected nearly everywhere is "0". I recently made the painful discovery that "0" is false in Perl 5. Gee. A non-empty string that's false. I was really hoping that would change in Perl 6, but I guess not.
> if ( "0" ) { say "True" } else { say "False" }
False
The ||= idiom clobbered some strings I really wasn't expecting:
$ perl -e '$x = "0"; $x ||= ""; print ">>$x<<\n";'
>><<
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Chaining Static Methods in PHP? Is it possible to chain static methods together using a static class? Say I wanted to do something like this:
$value = TestClass::toValue(5)::add(3)::subtract(2)::add(8)::result();
. . . and obviously I would want $value to be assigned the number 14. Is this possible?
Update: It doesn't work (you can't return "self" - it's not an instance!), but this is where my thoughts have taken me:
class TestClass {
public static $currentValue;
public static function toValue($value) {
self::$currentValue = $value;
}
public static function add($value) {
self::$currentValue = self::$currentValue + $value;
return self;
}
public static function subtract($value) {
self::$currentValue = self::$currentValue - $value;
return self;
}
public static function result() {
return self::$value;
}
}
After working that out, I think it would just make more sense to simply work with a class instance rather than trying to chain static function calls (which doesn't look possible, unless the above example could be tweaked somehow).
A: This is more accurate, easier, and more readable (allows code completion)
class Calculator
{
public static $value = 0;
protected static $onlyInstance;
protected function __construct ()
{
// disable creation of public instances
}
protected static function getself()
{
if (static::$onlyInstance === null)
{
static::$onlyInstance = new Calculator;
}
return static::$onlyInstance;
}
/**
* add to value
* @param numeric $num
* @return \Calculator
*/
public static function add($num)
{
static::$value += $num;
return static::getself();
}
/**
* subtract
* @param string $num
* @return \Calculator
*/
public static function subtract($num)
{
static::$value -= $num;
return static::getself();
}
/**
* multiple by
* @param string $num
* @return \Calculator
*/
public static function multiple($num)
{
static::$value *= $num;
return static::getself();
}
/**
* devide by
* @param string $num
* @return \Calculator
*/
public static function devide($num)
{
static::$value /= $num;
return static::getself();
}
public static function result()
{
return static::$value;
}
}
Example:
echo Calculator::add(5)
->subtract(2)
->multiple(2.1)
->devide(10)
->result();
result: 0.63
A: I like the solution provided by Camilo above, essentially since all you're doing is altering the value of a static member, and since you do want chaining (even though it's only syntactic sugar), then instantiating TestClass is probably the best way to go.
I'd suggest a Singleton pattern if you want to restrict instantiation of the class:
class TestClass
{
public static $currentValue;
private static $_instance = null;
private function __construct () { }
public static function getInstance ()
{
if (self::$_instance === null) {
self::$_instance = new self;
}
return self::$_instance;
}
public function toValue($value) {
self::$currentValue = $value;
return $this;
}
public function add($value) {
self::$currentValue = self::$currentValue + $value;
return $this;
}
public function subtract($value) {
self::$currentValue = self::$currentValue - $value;
return $this;
}
public function result() {
return self::$currentValue;
}
}
// Example Usage:
$result = TestClass::getInstance ()
->toValue(5)
->add(3)
->subtract(2)
->add(8)
->result();
A: class oop{
public static $val;
public static function add($var){
static::$val+=$var;
return new static;
}
public static function sub($var){
static::$val-=$var;
return new static;
}
public static function out(){
return static::$val;
}
public static function init($var){
static::$val=$var;
return new static;
}
}
echo oop::init(5)->add(2)->out();
A: People are overcomplicating this like crazy.
Check this out:
class OopClass
{
public $first;
public $second;
public $third;
public static function make($first)
{
return new OopClass($first);
}
public function __construct($first)
{
$this->first = $first;
}
public function second($second)
{
$this->second = $second;
return $this;
}
public function third($third)
{
$this->third = $third;
return $this;
}
}
Usage:
OopClass::make('Hello')->second('To')->third('World');
A: Little crazy code on php5.3... just for fun.
namespace chaining;
class chain
{
static public function one()
{return get_called_class();}
static public function two()
{return get_called_class();}
}
${${${${chain::one()} = chain::two()}::one()}::two()}::one();
A: You could always use the first method as a static and the remaining as instance methods:
$value = Math::toValue(5)->add(3)->subtract(2)->add(8)->result();
Or better yet:
$value = Math::eval(Math::value(5)->add(3)->subtract(2)->add(8));
class Math {
public $operation;
public $operationValue;
public $args;
public $allOperations = array();
public function __construct($aOperation, $aValue, $theArgs)
{
$this->operation = $aOperation;
$this->operationValue = $aValue;
$this->args = $theArgs;
}
public static function eval($math) {
if(strcasecmp(get_class($math), "Math") == 0){
$newValue = $math->operationValue;
foreach ($math->allOperations as $operationKey=>$currentOperation) {
switch($currentOperation->operation){
case "add":
$newValue = $newValue + $currentOperation->args;
break;
case "subtract":
$newValue = $newValue - $currentOperation->args;
break;
}
}
return $newValue;
}
return null;
}
public function add($number){
$math = new Math("add", null, $number);
$this->allOperations[count($this->allOperations)] = $math;
return $this;
}
public function subtract($number){
$math = new Math("subtract", null, $number);
$this->allOperations[count($this->allOperations)] = $math;
return $this;
}
public static function value($number){
return new Math("value", $number, null);
}
}
Just an FYI... I wrote this off the top of my head (right here on the site), so it may not run, but that is the idea. I could have also done a recursive method call to eval, but I thought this may be simpler. Please let me know if you would like me to elaborate or provide any other help.
A: With PHP 7 you will be able to use the desired syntax because of the new Uniform Variable Syntax
<?php
abstract class TestClass {
public static $currentValue;
public static function toValue($value) {
self::$currentValue = $value;
return __CLASS__;
}
public static function add($value) {
self::$currentValue = self::$currentValue + $value;
return __CLASS__;
}
public static function subtract($value) {
self::$currentValue = self::$currentValue - $value;
return __CLASS__;
}
public static function result() {
return self::$currentValue;
}
}
$value = TestClass::toValue(5)::add(3)::subtract(2)::add(8)::result();
echo $value;
Demo
A: Technically you can call a static method on an instance like $object::method() in PHP 7+, so returning a new instance should work as a replacement for return self. And indeed it works.
final class TestClass {
public static $currentValue;
public static function toValue($value) {
self::$currentValue = $value;
return new static();
}
public static function add($value) {
self::$currentValue = self::$currentValue + $value;
return new static();
}
public static function subtract($value) {
self::$currentValue = self::$currentValue - $value;
return new static();
}
public static function result() {
return self::$currentValue;
}
}
$value = TestClass::toValue(5)::add(3)::subtract(2)::add(8)::result();
var_dump($value);
Outputs int(14).
This is about the same as returning __CLASS__, as used in another answer. I rather hope no-one ever decides to actually use these forms of API, but you asked for it.
A: If toValue(x) returns an object, you could do like this:
$value = TestClass::toValue(5)->add(3)->substract(2)->add(8);
Providing that toValue returns a new instance of the object, and each next method mutates it, returning an instance of $this.
A: In a nutshell... no. :) The scope resolution operator (::) would work for the TestClass::toValue(5) part, but everything after that will just give a syntax error.
Once namespaces are implemented in 5.3, you can have "chained" :: operators, but all that'll do is drill down through the namespace tree; it won't be possible to have methods in the middle of things like this.
A: The best that can be done
class S
{
public static function __callStatic($name,$args)
{
echo 'called S::'.$name . '( )<p>';
return '_t';
}
}
$_t='S';
${${S::X()}::F()}::C();
A: No, this won't work. The :: operator needs to evaluate back to a class, so after the TestClass::toValue(5) evaluates, the ::add(3) method would only be able to evaluate on the answer of the last one.
So if toValue(5) returned the integer 5, you would basically be calling int(5)::add(3) which obviously is an error.
A: The easiest way I have ever found for method chaining from a new instance or a static method of a class is as below. I have used late static binding here and I really love this solution.
I have created a utility to send multiple user notifications on the next page using toastr in Laravel.
<?php
namespace App\Utils;
use Session;
use Illuminate\Support\HtmlString;
class Toaster
{
private static $options = [
"closeButton" => false,
"debug" => false,
"newestOnTop" => false,
"progressBar" => false,
"positionClass" => "toast-top-right",
"preventDuplicates" => false,
"onclick" => null,
"showDuration" => "3000",
"hideDuration" => "1000",
"timeOut" => "5000",
"extendedTimeOut" => "1000",
"showEasing" => "swing",
"hideEasing" => "linear",
"showMethod" => "fadeIn",
"hideMethod" => "fadeOut"
];
private static $toastType = "success";
private static $instance;
private static $title;
private static $message;
private static $toastTypes = ["success", "info", "warning", "error"];
public function __construct($options = [])
{
self::$options = array_merge(self::$options, $options);
}
public static function setOptions(array $options = [])
{
self::$options = array_merge(self::$options, $options);
return self::getInstance();
}
public static function setOption($option, $value)
{
self::$options[$option] = $value;
return self::getInstance();
}
private static function getInstance()
{
if(empty(self::$instance) || self::$instance === null)
{
self::setInstance();
}
return self::$instance;
}
private static function setInstance()
{
self::$instance = new static();
}
public static function __callStatic($method, $args)
{
if(in_array($method, self::$toastTypes))
{
self::$toastType = $method;
return self::getInstance()->initToast($method, $args);
}
throw new \Exception("Ohh my god. That toast doesn't exists.");
}
public function __call($method, $args)
{
return self::__callStatic($method, $args);
}
private function initToast($method, $params=[])
{
if(count($params)==2)
{
self::$title = $params[0];
self::$message = $params[1];
}
elseif(count($params)==1)
{
self::$title = ucfirst($method);
self::$message = $params[0];
}
$toasters = [];
if(Session::has('toasters'))
{
$toasters = Session::get('toasters');
}
$toast = [
"options" => self::$options,
"type" => self::$toastType,
"title" => self::$title,
"message" => self::$message
];
$toasters[] = $toast;
Session::forget('toasters');
Session::put('toasters', $toasters);
return $this;
}
public static function renderToasters()
{
$toasters = Session::get('toasters');
$string = '';
if(!empty($toasters))
{
$string .= '<script type="application/javascript">';
$string .= "$(function() {\n";
foreach ($toasters as $toast)
{
$string .= "\n toastr.options = " . json_encode($toast['options'], JSON_PRETTY_PRINT) . ";";
$string .= "\n toastr['{$toast['type']}']('{$toast['message']}', '{$toast['title']}');";
}
$string .= "\n});";
$string .= '</script>';
}
Session::forget('toasters');
return new HtmlString($string);
}
}
This will work as below.
Toaster::success("Success Message", "Success Title")
->setOption('showDuration', 5000)
->warning("Warning Message", "Warning Title")
->error("Error Message");
A: Fully functional example of method chaining with static attributes:
<?php
class Response
{
static protected $headers = [];
static protected $http_code = 200;
static protected $http_code_msg = '';
static protected $instance = NULL;
protected function __construct() { }
static function getInstance(){
if(static::$instance == NULL){
static::$instance = new static();
}
return static::$instance;
}
public function addHeaders(array $headers)
{
static::$headers = $headers;
return static::getInstance();
}
public function addHeader(string $header)
{
static::$headers[] = $header;
return static::getInstance();
}
public function code(int $http_code, string $msg = NULL)
{
static::$http_code_msg = $msg;
static::$http_code = $http_code;
return static::getInstance();
}
public function send($data, int $http_code = NULL){
$http_code = $http_code != NULL ? $http_code : static::$http_code;
if ($http_code != NULL)
header(trim("HTTP/1.0 ".$http_code.' '.static::$http_code_msg));
if (is_array($data) || is_object($data))
$data = json_encode($data);
echo $data;
exit();
}
function sendError(string $msg_error, int $http_code = null){
$this->send(['error' => $msg_error], $http_code);
}
}
Example of use:
Response::getInstance()->code(400)->sendError("Lacks id in request");
A: Here's another way without going through a getInstance method (tested on PHP 7.x):
class TestClass
{
private $result = 0;
public function __call($method, $args)
{
return $this->call($method, $args);
}
public static function __callStatic($method, $args)
{
return (new static())->call($method, $args);
}
private function call($method, $args)
{
if (! method_exists($this , '_' . $method)) {
throw new Exception('Call undefined method ' . $method);
}
return $this->{'_' . $method}(...$args);
}
private function _add($num)
{
$this->result += $num;
return $this;
}
private function _subtract($num)
{
$this->result -= $num;
return $this;
}
public function result()
{
return $this->result;
}
}
The class can be used as following:
$res1 = TestClass::add(5)
->add(3)
->subtract(2)
->add(8)
->result();
echo $res1 . PHP_EOL; // 14
$res2 = TestClass::subtract(1)->add(10)->result();
echo $res2 . PHP_EOL; // 9
A: Also works as:
ExampleClass::withBanners()->withoutTranslations()->collection($values)
Using new static(self::class);
public static function withoutTranslations(): self
{
self::$withoutTranslations = true;
return new static(self::class);
}
public static function withBanners(): self
{
return new static(self::class);
}
public static function collection($values): self
{
return $values;
}
A: Use PHP 7! If your web provider cannot, change provider! Don't stay locked in the past.
final class TestClass {
public static $currentValue;
public static function toValue($value) {
self::$currentValue = $value;
return __CLASS__;
}
public static function add($value) {
self::$currentValue = self::$currentValue + $value;
return __CLASS__;
}
public static function subtract($value) {
self::$currentValue = self::$currentValue - $value;
return __CLASS__;
}
public static function result() {
return self::$currentValue;
}
}
And very simple use:
$value = TestClass::toValue(5)::add(3)::subtract(2)::add(8)::result();
var_dump($value);
Return (or throw error):
int(14)
Contract completed.
Rule one: the most evolved and maintainable approach is always better.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
}
|
Q: How would you handle users who don't read dialog boxes? A recent article on Ars Technica discusses a recent study performed by the Psychology Department of North Carolina State University, that showed users have a tendency to do whatever it takes to get rid of a dialog box to get back to their task at hand. Most of them would click OK or yes, minimize the dialog, or close the dialog, regardless of the message being displayed. Some of the dialog boxes displayed were real, and some of them were fake (like those popups displayed by webpages posing as an antivirus warning). The response times would indicate that those users aren't really reading those dialog boxes.
So, knowing this, how would this affect your design, and what would you try to do about it (if anything)?
A: Often developers use modal dialog boxes simply because its easy to code them.
However, non-modal notifications are often more convenient for the user to deal with.
A: One suggestion:
*
*Don't use dialog boxes. Especially modal, OK/Cancel dialog boxes.
Sometimes that is hard... how do you handle opening a file? Sometimes it's easy... do you really need to warn the user that they are about to overwrite a file? Chances are, if I'm blindly clicking "OK", I'm not going to heed any warnings whatsoever.
A: I try to design applications to be robust in the face of accidents -- either slips (inadvertent operations, such as clicking in the wrong place) or mistakes (cognitive errors, such as clicking Ok vs. Cancel on a dialog). Some ways to do this are:
*
*infinite (or at least multi-step) undo / redo
*integrate documentation with the interface, via dynamic tooltips and other context-sensitive means of communication (One paper that is particularly relevant is about 'Surprise, Explain, Reward' (direct link: SER) -- using typical psychological responses to unexpected behavior to inform users)
*Incorporate the state of the system into said documentation (use the current user's data as examples, and make the documentation concrete by using data that they can see right now)
*Expect user error. If there's a chance that someone will try to write to a:\ when there isn't a disk in place, then implement a time-out so the system can fail gracefully, and prompt for another location. Save the data in memory until it's secure on disk, etc.
This boils down to two core things: (1) Program defensively, and (2) Keep the user as well informed as you can. If the system's interface is easy to use, and behaves according to their expectations then they are more likely to know which button to click when an annoying dialog appears.
I also try very, very hard to avoid anything modal, so users can ignore most dialogs I have to use, at least for a while (and when they really need to pay attention to them, they have enough information to know what to do with it).
It's impossible to make a system completely fool-proof, but I've found that the above techniques go a long way in the right direction. (and they have been incorporated in the systems used to develop Surprise Explain Reward and other tools that have been vetted by extensive user studies.)
A: If you must use a dialog, put descriptive captions on the buttons within the dialog.
For example, instead of OK and Cancel buttons, have them say "Send Invoice" and "Go Back", or whatever is appropriate in the context of your dialog.
That way, the text is right under their cursor and they have a good chance of understanding.
Mac OS X does this most of the time. Here's an example image.
Edit:
This is a better image, and I found it at the Apple Human Interface Guideline site, which is a great reference, and very readable. This document on that site is all about Dialogs.
A: Firstly the use of color and icons should help give the user some visual awareness of the severity of the issue, red to convey exceptional, yellow to convey a warning, and white to convey informational.
Secondly the use of verbs on your dialog buttons gives the users a sense of what they are telling the system to do even if they don't read the text of the dialog.
Lastly, if you are interested in looking into a completely different notification paradigm check out the Information Bar or Notification Bar that is implemented in Firefox and Internet Explorer. StackOverflow uses the same type of mechanism to notify users when they have gotten a new badge.
The Information Bar is non-obtrusive and stays at the top of the screen waiting for user attention. I think it's a great design metaphor.
Here are a couple of implementation tutorials:
*
*C#
*JavaScript
Here is Microsoft's guidance on dialog design, it touches on the Information Bar concept as well.
A: Wrong question. "How would you handle users" starts at the wrong end.
The correct question is "Given that dialogs distract users from the task at hand, what better alternatives exist?".
When working to achieve a goal or finish a task, we can distinguish three situations:
(1) The application comes to the conclusion that there is no action it can take which will make the user achieve the goal. Pop up a message, with one button to dismiss it. You don't care if the reader understands it, since the outcome doesn't matter anyway.
(2) There is only one action that you can take, or the alternatives are irrelevant to the user. Don't bother him at all.
(3) There are two or more ways of achieving the goal. Let the user choose between these. Do not formulate this as a yes/no question. (Vista offers this as a common dialog, to replace the message box.) If at all possible, do not make this an irreversible choice.
The exception to this rule is the situation where the user would expect a yes/no question. But really, if that is the case, then why isn't the question part of the normal workflow? Dialog boxes are outside the normal workflow.
A: Immediately Steve Krug's book Don't Make Me Think comes to mind.
In the design of dialog boxes, status messages back to user, etc. it is good to use iconography and color hints as to what the words actually say.
So highlight error messages red, warnings yellow, etc.
A: The Humane Interface, by Jef Raskin, is worth reading. A dialog box is the last resort, and a sign of poor design. Most are unnecessary and, as you discovered, are all ignored by users.
Why is there a dialog box? Solve that problem - don't ask users to confirm an operation, instead make it easy to undo the operation. Don't popup a dialog box announcing an error - do whatever recovery you're going to do anyway (or whatever is possible). Definitely don't show dialog boxes which have only one outcome ('OK' only boxes are the devil), present the information within the app unobtrusively.
A: A couple of suggestions
*
*Only use boxes when absolutely necessary.
*Always set the default option to the least dangerous option
A: A .NET Rocks episode comes to mind (I believe episode 338, "Mark Miller on the Science of Good UI") that discusses this very topic. I think what is key to this whole discussion is that this is basic UI design taken too far. Where the modal was once an acceptable means of communication, we now find that it has become a programming faux pas. Users understand that 6 times out of 10 the information is not pertinent enough for them to worry about. As a result they treat all modals the same way -- learned helplessness. If a modal comes up and tells me that Application Error X occurred and all I can click is "OK" -- even when I don't think it is "OK" -- I learn a particular behavior. I associate modals with the idea that I probably can't do much about them, but if I click OK/Yes then I can get back to what I need.
So, why is it still used? Perhaps developers have tried avoiding the fact that application development is becoming more than just a basic interface, and users require a fluid UI design -- old standbys are hard to give up...
I think the key here is to understand that good UI design now indicates that interruptions (to even the most novice computer user) are annoyances, and we need to strive for a seamless user experience where the focus of the application is the user -- not the needs of the application via prompting and error reporting. Don't allow a user to get into situations where they don't care.
A: One thing you can do is have the OK button disabled for 3 seconds.
Firefox does this when you install an extension.
Edit: All right, some people find this annoying. I still think that about 1 second would be all right. It would suppress the instant-OK-clicking instinct that people (myself included) have, and force a double-take. Of course, even this will annoy people if your dialog is not something that they actually need to read.
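As a concrete illustration of the delayed-enable idea, here is a minimal WinForms sketch; the class name and the one-second delay are my own choices, not taken from any real product:
using System;
using System.Windows.Forms;
// Minimal sketch: a dialog whose OK button stays disabled briefly,
// forcing a short double-take before the user can dismiss it.
public class DelayedOkDialog : Form
{
    public DelayedOkDialog()
    {
        var ok = new Button { Text = "OK", Enabled = false, DialogResult = DialogResult.OK };
        Controls.Add(ok);
        var timer = new Timer { Interval = 1000 }; // roughly the one second suggested above
        timer.Tick += (s, e) => { ok.Enabled = true; timer.Stop(); };
        timer.Start();
    }
}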
A: I think you probably want to read this paper:
"Impact of High-Intensity Negotiated-Style Interruptions on End-User Debugging", T. J. Robertson, Joseph Lawrance, and Margaret Burnett, Journal of Visual Languages and Computing 17(2), 187-202, April 2006.
It asked a similar question. The result was, let the user know you want their attention and then sit back and wait for the user to respond. Don't interrupt the user though, that's not what he or she wants.
A: The [Lightbox](http://en.wikipedia.org/wiki/Lightbox_(JavaScript)) modal dialog seems an effective technique in some cases (web 2.0 derivation, but can be implimented in other contexts).
One other point: if you can forgo a dialog box for an Undo function (Gmail for one championed this concept as standard webapp behaviour) that's something to consider.
A: Lots of good advice above. I just want to add to the book recommendations - Joel Spolsky's "User Interface Design for Programmers" book is worth reading:
http://www.amazon.com/User-Interface-Design-Programmers-Spolsky/dp/1893115941/ref=pd_bbs_sr_4?ie=UTF8&s=books&qid=1222233643&sr=8-4
A: If you must use a dialog, reward the user with amusingly sympathetic or even satirical and very short explanatory text. If you occasionally hand out something hilariously scandalous they will read everything.
User is running dangerously low on
common sense. Please remove this user
and insert another one.
A: I have little patience with users who do not read what has taken me a lot of time and effort to develop: 1) the application, and 2) the instructions. Aside from that, if you do not read and just do "whatever it takes", you are on your own. I state that up front. I design my applications to be as intuitive as possible, and still there are people who will make support phone calls out of the blue, like a child blurting out in class when they shouldn't. I have no tolerance for that. Read the manual, read the dialogs - the answers to 99% of the problems are right there.
A: First of all, stupid should hurt, but usually it doesn't so...
The next best thing is including an icon that tries to convey the severity of the issue. Some percentage of those who won't read might change their habit if the dialog's icon seems ominous. Some percentage won't read it regardless.
A: Include a multiple choice quiz at the end of the dialog box, to which the user must select the answer that shows they really did read and understand the text. Randomly switch the order of the choices so they cannot always click the same one.
A: Changing the wording and how the dialog works helps. For example, having OK/Cancel buttons tends to let users ignore most of the dialog. If you remove the normal buttons and replace these with wordier command links, users are more likely to read each button because the 'quick, go away' option isn't available.
A: I call this the "autopilot" problem.
*
*Do not use the OK, Cancel buttons on the bottom of the screen. Look at the way that Vista tries to force users to make a real decision.
*Disable the buttons for a few seconds, display a "Time to think" timer/progressbar. So the user cannot click on autopilot. Users tend to find this very annoying.
A: Don't use confirmation (Are you sure? Yes/No), but use Undo.
Don't pop up warnings that are blocking, because users will try to get working again as fast as possible, disregarding the message and just clicking it away. Use something like the information bar from Internet Explorer, which is not blocking.
A: You can avoid using dialog boxes altogether! In some programs there's a minibuffer that shows errors and warnings. Alongside that, it may also ask you questions, where you must type what you want to do. It is quite a clean and nice solution; I tend to prefer it over a menubar.
But if you really must use dialog boxes, try this:
*
*One sentence per dialog only
*At most two or three buttons
*Make the text readable inside the dialog (bigger, black-on-white)
*Rather use one dialog than many small ones repeatedly (tip: a listbox)
What do I think about dialog boxes? In a nutshell: they are dumb and stupid things. Programs that use them get in my way and slow me down with their stupid, meaningless questions. Also, programs that use dialog boxes are often rather dumb.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: What's the easiest way to commit and push a single file while leaving other modifications alone? I'm relatively new to Mercurial and my team is trying it out right now as a replacement for Subversion.
How can I commit and push a single file out to another repository while leaving other modifications in my working directory uncommitted (or at least not pushed to the other repository)?
This happens for us with database migrations. We want to commit the migration to source control so a DBA can view and edit it while we're working on the code modifications to go along with that database migration. The changes aren't yet ready to go so we don't want to push all of them out.
In subversion, I'd simply do:
svn add my_migration.sql
# commit only the migration, but not the other files I'm working on
svn commit -m "migration notes" my_migration.sql
and continue working locally.
This doesn't work with mercurial as when I'm pushing it out to the other repository, if there are changes to it that I haven't pulled down, it wants me to pull them down, merge them, and commit that merge to the repository. Commits after a merge don't allow you to omit files so it forces you to commit everything in your local repository.
The easiest thing that I can figure out is to commit the file to my local repository, clone my local repository, fetch any new changes from the actual repository, merge them and commit that merge, and then push my changes out.
hg add my_migration.sql
hg commit -m "migration notes" my_migration.sql
cd ..
hg clone project project-clone
cd project-clone
hg fetch http://hg/project
hg push http://hg/project
This works, but it feels like I'm missing something easier, some way to tell mercurial to ignore the files already in my working directory, just do the merge and send the files along. I suspect mercurial queues can do this, but I don't fully grok mq yet.
A: It's been almost 2 years since I originally posed this question. I'd do it differently now (as I mentioned in a comment above the question above). What I'd do now would be to instead commit my changes to the one file in my local repo (you can use the hg record extension to only commit pieces of a file):
hg commit -m "commit message" filename
Then just push out.
hg push
If there's a conflict because other changes have been made to the repo that I need to merge first, I'd update to the parent revision (seen with "hg parents -r ." if you don't know what it is), commit my other changes there so I've got 2 heads. Then revert back to the original single file commit and pull/merge the changes into that version. Then push out the changes with
hg push --rev .
To push out only the single file and the merge of that revision. Then you can merge the two heads you've got locally.
This way gets rid of the mq stuff and the potential for rejected hunks and keeps everything tracked by source control. You can also "hg strip" revisions off if you later decide you don't want them.
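Spelled out as commands, that flow looks roughly like this; the <...> revision identifiers are placeholders for whatever "hg parents -r ." and "hg log" show you:
# commit just the migration
hg commit -m "migration notes" my_migration.sql
# push rejected? park the rest of the work as a second head
hg update -r <parent-of-migration>
hg commit -m "WIP: other changes"
# back on the migration commit, merge in the remote changes
hg update -r <migration-rev>
hg pull
hg merge
hg commit -m "merge"
# push only the migration and its merge
hg push --rev .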
A: There's a Mercurial feature that implements shelve and unshelve commands, which give you an interactive way to specify changes to store away until a later time: Shelve.
Then you can hg shelve and hg unshelve to temporarily store changes away. It lets you work at the "patch hunk" level to pick and choose the items to shelve away. It didn't appear to shelve a file I had listed for adding, only files already in the repo with modifications.
It is included with Mercurial as an "extension" which just means you have to enable it in your hg config file.
Notes for really old versions of Mercurial (before shelve was included -- this is no longer necessary):
I didn't see any great install instructions with some googling, so here is the combined stuff I used to get it working:
Get it with:
hg clone http://freehg.org/u/tksoh/hgshelve/ hgshelve
The only file (currently) in the project is the hgshelve.py file.
Modify your ~/.hgrc to add the shelve extension, pointing to where you cloned the repo:
[extensions]
hgshelve=/Users/ted/Documents/workspace/hgshelve/hgshelve.py
A: Another option if you don't want to rely on extensions is to keep a clone of your upstream repository locally that you only use for these sorts of integration tasks.
In your example, you could simply pull/merge your change into the integration/upstream repository and push it directly up to the remote server.
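As a sketch, with illustrative paths and an illustrative changeset hash:
hg clone http://hg/project project-integration
cd project-integration
hg pull -r <migration-rev> ../project   # pull only the migration changeset from your working clone
hg merge                                # only needed if the heads diverged
hg commit -m "merge migration"
hg push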
A: What I generally use to commit a single file:
hg commit -m "commit message" filename
In case I later have a merge conflict and am still not ready to commit my changes, I follow these steps:
1) Create a patch file.
hg diff > changes.patch
2) Revert all your outstanding uncommitted changes, only after checking your patch file.
hg revert --all
3) Pull,update and merge to latest revision
hg pull -u
hg merge
hg commit -m "local merge"
4) Now simply import your patch back and get your changes.
hg import --no-commit changes.patch
Remember to use the --no-commit flag to keep hg import from auto-committing the changes.
A: tl;dr: My original explanation looks complicated, but I hope it fully explains how to use a patch queue. Here's the short version:
$ hg qnew -m "migration notes" -f migration my_migration.sql
$ hg qnew -f working-code
# make some changes to your code
$ hg qrefresh # update the patch with the changes you just made
$ hg qfinish -a # turn all the applied patches into normal hg commits
Mercurial Queues makes this sort of thing a breeze, and it makes more complex manipulation of changesets possible. It's worth learning.
In this situation first you'd probably want to save what's in your current directory before pulling down the changes:
# create a patch called migration containing your migration
$ hg qnew -m "migration notes" -f migration.patch my_migration.sql
$ hg qseries -v # the current state of the patch queue, A means applied
0 A migration.patch
$ hg qnew -f working-code.patch # put the rest of the code in a patch
$ hg qseries -v
0 A migration.patch
1 A working-code.patch
Now let's do some additional work on the working code. I'm going to keep doing qseries just to be explicit, but once you build up a mental model of patch queues, you won't have to keep looking at the list.
$ hg qtop # show the patch we're currently editing
working-code.patch
$ ...hack, hack, hack...
$ hg diff # show the changes that have not been incorporated into the patch
blah, blah
$ hg qrefresh # update the patch with the changes you just made
$ hg qdiff # show the top patch's diff
Because all your work is saved in the patch queue now, you can unapply those changes and restore them after you've pulled in the remote changes. Normally to unapply all patches, just do hg qpop -a. Just to show the effect upon the patch queue I'll pop them off one at a time.
$ hg qpop # unapply the top patch, U means unapplied
$ hg qseries -v
0 A migration.patch
1 U working-code.patch
$ hg qtop
migration.patch
$ hg qpop
$ hg qseries -v
0 U migration.patch
1 U working-code.patch
At this point, it's as if there are no changes in your directory. Do the hg fetch. Now you can push your patch queue changes back on, and merge them if there are any conflicts. This is conceptually somewhat similar to git's rebase.
$ hg qpush # put the first patch back on
$ hg qseries -v
0 A migration.patch
1 U working-code.patch
$ hg qfinish -a # turn all the applied patches into normal hg commits
$ hg qseries -v
0 U working-code.patch
$ hg out
migration.patch commit info... blah, blah
$ hg push # push out your changes
At this point, you've pushed out the migration while keeping your other local changes. Your other changes are in a patch in the queue. I do most of my personal development using a patch queue to help me structure my changes better. If you want to get rid of the patch queue and go back to a normal style you'll have to export your changes and reimport them in "normal" mercurial.
$ hg qpush
$ hg qseries -v
0 A working-code.patch
$ hg export qtip > temp.diff
$ rm -r .hg/patches # get rid of mq from the repository entirely
$ hg import --no-commit temp.diff # apply the changes to the working directory
$ rm temp.diff
I'm hugely addicted to patch queues for development and mq is one of the nicest implementations out there. The ability to craft several changes simultaneously really does improve how focused and clean your commits are. It takes a while to get used to, but it goes incredibly well with a DVCS workflow.
A: Since you said easiest, I often use hg commit -i (--interactive) even when committing whole files. With --interactive you can just select the file(s) you want rather than typing their entire path(s) on the command line. As an added bonus you can even selectively include/exclude chunks within the files.
And then just hg push to push that newly created commit.
I put more details on using hg commit --interactive in this answer: https://stackoverflow.com/a/47931672/255961
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
}
|
Q: How do I remove the file suffix and path portion from a path string in Bash? Given a string file path such as /foo/fizzbuzz.bar, how would I use bash to extract just the fizzbuzz portion of said string?
A: Here's how to do it with the # and % operators in Bash.
$ x="/foo/fizzbuzz.bar"
$ y=${x%.bar}
$ echo ${y##*/}
fizzbuzz
${x%.bar} could also be ${x%.*} to remove everything after a dot or ${x%%.*} to remove everything after the first dot.
Example:
$ x="/foo/fizzbuzz.bar.quux"
$ y=${x%.*}
$ echo $y
/foo/fizzbuzz.bar
$ y=${x%%.*}
$ echo $y
/foo/fizzbuzz
Documentation can be found in the Bash manual. Look for ${parameter%word} and ${parameter%%word} trailing portion matching section.
A: Using basename assumes that you know what the file extension is, doesn't it?
And I believe that the various regular expression suggestions don't cope with a filename containing more than one "."
The following seems to cope with double dots. Oh, and filenames that contain a "/" themselves (just for kicks)
To paraphrase Pascal, "Sorry this script is so long. I didn't have time to make it shorter"
#!/usr/bin/perl
$fullname = $ARGV[0];
($path,$name) = $fullname =~ /^(.*[^\\]\/)*(.*)$/;
($basename,$extension) = $name =~ /^(.*)(\.[^.]*)$/;
print $basename . "\n";
A: Pure bash, done in two separate operations:
*
*Remove the path from a path-string:
path=/foo/bar/bim/baz/file.gif
file=${path##*/}
#$file is now 'file.gif'
*Remove the extension from a path-string:
base=${file%.*}
#${base} is now 'file'.
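If you want those two steps as one reusable helper, a small function might look like this (the function name is my own):
stem() {
    local file=${1##*/}          # strip the directory part
    printf '%s\n' "${file%.*}"   # strip the (last) extension
}
stem /foo/bar/bim/baz/file.gif   # prints: file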
A: In addition to the POSIX conformant syntax used in this answer,
basename string [suffix]
as in
basename /foo/fizzbuzz.bar .bar
GNU basename supports another syntax:
basename -s .bar /foo/fizzbuzz.bar
with the same result. The difference and advantage is that -s implies -a, which supports multiple arguments:
$ basename -s .bar /foo/fizzbuzz.bar /baz/foobar.bar
fizzbuzz
foobar
This can even be made filename-safe by separating the output with NUL bytes using the -z option, for example for these files containing blanks, newlines and glob characters (quoted by ls):
$ ls has*
'has'$'\n''newline.bar' 'has space.bar' 'has*.bar'
Reading into an array:
$ readarray -d $'\0' arr < <(basename -zs .bar has*)
$ declare -p arr
declare -a arr=([0]=$'has\nnewline' [1]="has space" [2]="has*")
readarray -d requires Bash 4.4 or newer. For older versions, we have to loop:
while IFS= read -r -d '' fname; do arr+=("$fname"); done < <(basename -zs .bar has*)
A: look at the basename command:
NAME="$(basename /foo/fizzbuzz.bar .bar)"
instructs it to remove the suffix .bar, results in NAME=fizzbuzz
A: perl -pe 's/\..*$//;s{^.*/}{}'
A: If you can't use basename as suggested in other posts, you can always use sed. Here is an (ugly) example. It isn't the greatest, but it works by extracting the wanted string and replacing the input with the wanted string.
echo '/foo/fizzbuzz.bar' | sed 's|.*\/\([^\.]*\)\(\..*\)$|\1|g'
Which will get you the output
fizzbuzz
A: Using basename I used the following to achieve this:
for file in *; do
    ext=${file##*.}
    fname=$(basename "$file" ".$ext")   # note the leading dot: "$ext" alone would leave a trailing '.'
    # Do things with $fname
done
This requires no a priori knowledge of the file extension and works even when you have a filename that has dots in its filename (in front of its extension); it does require the program basename though, but this is part of the GNU coreutils so it should ship with any distro.
A: Beware of the suggested perl solution: it removes anything after the first dot.
$ echo some.file.with.dots | perl -pe 's/\..*$//;s{^.*/}{}'
some
If you want to do it with perl, this works:
$ echo some.file.with.dots | perl -pe 's/(.*)\..*$/$1/;s{^.*/}{}'
some.file.with
But if you are using Bash, the solutions with y=${x%.*} (or basename "$x" .ext if you know the extension) are much simpler.
A: The basename and dirname functions are what you're after:
mystring=/foo/fizzbuzz.bar
echo basename: $(basename "${mystring}")
echo basename + remove .bar: $(basename "${mystring}" .bar)
echo dirname: $(dirname "${mystring}")
Has output:
basename: fizzbuzz.bar
basename + remove .bar: fizzbuzz
dirname: /foo
A: Pure bash way:
~$ x="/foo/bar/fizzbuzz.bar.quux.zoom";
~$ y=${x/\/*\//};
~$ echo ${y/.*/};
fizzbuzz
This functionality is explained on man bash under "Parameter Expansion". Non bash ways abound: awk, perl, sed and so on.
EDIT: Works with dots in file suffixes and doesn't need to know the suffix (extension), but doesn't work with dots in the name itself.
A: basename does that: it removes the path. It will also remove the suffix if given and if it matches the suffix of the file, but you would need to know the suffix to give to the command. Otherwise you can use mv and figure out what the new name should be some other way.
A: Combining the top-rated answer with the second-top-rated answer to get the filename without the full path:
$ x="/foo/fizzbuzz.bar.quux"
$ y=$(basename "${x%%.*}")
$ echo $y
fizzbuzz
A: If you want to keep just the filename with extension and strip the file path
$ x="myfile/hello/foo/fizzbuzz.bar"
$ echo ${x##*/}
fizzbuzz.bar
Explanation in Bash manual, see ${parameter##word}
A: You can use
mv *<PATTERN>.jar "$(basename *<PATTERN>.jar <PATTERN>.jar).jar"
For example, I wanted to remove -SNAPSHOT from my file name. For that I used the command below:
mv *-SNAPSHOT.jar "$(basename *-SNAPSHOT.jar -SNAPSHOT.jar).jar"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "497"
}
|
Q: How can I upload a photo to a server with the iPhone? I'm writing an iPhone app that takes a photo and then uploads it to a server. How do I upload a photo to a server with Cocoa? I suppose I use NSUrl somewhere.
Thanks!
A: Hope the following code snippet works for you. I have used NSDateFormatter and NSDate to get a unique name for the image each time.
Note: 'ImageUploadURL' is a string #define with the base URL; you need to replace it with your server URL.
//Date format for Image Name...
NSDateFormatter *format = [[NSDateFormatter alloc] init];
[format setDateFormat:@"yyyyMMddHHmmss"];
NSDate *now = [[NSDate alloc] init];
NSString *imageName = [NSString stringWithFormat:@"Image_%@", [format stringFromDate:now]];
[now release];
[format release];
NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
[request setURL:[NSURL URLWithString:ImageUploadURL]];
[request setHTTPMethod:@"POST"];
/*
Set Header and content type of your request.
*/
NSString *boundary = [NSString stringWithString:@"---------------------------Boundary Line---------------------------"];
NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary];
[request addValue:contentType forHTTPHeaderField: @"Content-Type"];
/*
now lets create the body of the request.
*/
NSMutableData *body = [NSMutableData data];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"%@.jpg\"\r\n", imageName] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[NSData dataWithData:UIImageJPEGRepresentation(image, 0.9f)]]; // quality is 0.0-1.0, not 0-100
[body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
// send the geotag as a proper form part, not as a stray query fragment after the closing boundary
[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"geotag\"\r\n\r\n%@", [self _currentLocationMetadata]] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
// set body with request.
[request setHTTPBody:body];
[request addValue:[NSString stringWithFormat:@"%d", [body length]] forHTTPHeaderField:@"Content-Length"];
// now lets make the connection to the web
[NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
A: Header:
@interface EPUploader : NSObject {
NSURL *serverURL;
NSString *filePath;
id delegate;
SEL doneSelector;
SEL errorSelector;
BOOL uploadDidSucceed;
}
- (id)initWithURL: (NSURL *)serverURL
filePath: (NSString *)filePath
delegate: (id)delegate
doneSelector: (SEL)doneSelector
errorSelector: (SEL)errorSelector;
- (NSString *)filePath;
@end
Main:
#import "EPUploader.h"
#import <zlib.h>
static NSString * const BOUNDRY = @"0xKhTmLbOuNdArY";
static NSString * const FORM_FLE_INPUT = @"uploaded";
#define ASSERT(x) NSAssert(x, @"")
@interface EPUploader (Private)
- (void)upload;
- (NSURLRequest *)postRequestWithURL: (NSURL *)url
boundry: (NSString *)boundry
data: (NSData *)data;
- (NSData *)compress: (NSData *)data;
- (void)uploadSucceeded: (BOOL)success;
- (void)connectionDidFinishLoading:(NSURLConnection *)connection;
@end
@implementation EPUploader
/*
*-----------------------------------------------------------------------------
*
* -[Uploader initWithURL:filePath:delegate:doneSelector:errorSelector:] --
*
* Initializer. Kicks off the upload. Note that upload will happen on a
* separate thread.
*
* Results:
* An instance of Uploader.
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (id)initWithURL: (NSURL *)aServerURL // IN
filePath: (NSString *)aFilePath // IN
delegate: (id)aDelegate // IN
doneSelector: (SEL)aDoneSelector // IN
errorSelector: (SEL)anErrorSelector // IN
{
if ((self = [super init])) {
ASSERT(aServerURL);
ASSERT(aFilePath);
ASSERT(aDelegate);
ASSERT(aDoneSelector);
ASSERT(anErrorSelector);
serverURL = [aServerURL retain];
filePath = [aFilePath retain];
delegate = [aDelegate retain];
doneSelector = aDoneSelector;
errorSelector = anErrorSelector;
[self upload];
}
return self;
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader dealloc] --
*
* Destructor.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)dealloc
{
[serverURL release];
serverURL = nil;
[filePath release];
filePath = nil;
[delegate release];
delegate = nil;
doneSelector = NULL;
errorSelector = NULL;
[super dealloc];
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader filePath] --
*
* Gets the path of the file this object is uploading.
*
* Results:
* Path to the upload file.
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (NSString *)filePath
{
return filePath;
}
@end // Uploader
@implementation EPUploader (Private)
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) upload] --
*
* Uploads the given file. The file is compressed before being uploaded.
* The data is uploaded using an HTTP POST command.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)upload
{
NSData *data = [NSData dataWithContentsOfFile:filePath];
ASSERT(data);
if (!data) {
[self uploadSucceeded:NO];
return;
}
if ([data length] == 0) {
// There's no data, treat this the same as no file.
[self uploadSucceeded:YES];
return;
}
// NSData *compressedData = [self compress:data];
// ASSERT(compressedData && [compressedData length] != 0);
// if (!compressedData || [compressedData length] == 0) {
// [self uploadSucceeded:NO];
// return;
// }
NSURLRequest *urlRequest = [self postRequestWithURL:serverURL
boundry:BOUNDRY
data:data];
if (!urlRequest) {
[self uploadSucceeded:NO];
return;
}
NSURLConnection * connection =
[[NSURLConnection alloc] initWithRequest:urlRequest delegate:self];
if (!connection) {
[self uploadSucceeded:NO];
}
// Now wait for the URL connection to call us back.
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) postRequestWithURL:boundry:data:] --
*
* Creates an HTTP POST request.
*
* Results:
* The HTTP POST request.
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (NSURLRequest *)postRequestWithURL: (NSURL *)url // IN
boundry: (NSString *)boundry // IN
data: (NSData *)data // IN
{
// from http://www.cocoadev.com/index.pl?HTTPFileUpload
NSMutableURLRequest *urlRequest =
[NSMutableURLRequest requestWithURL:url];
[urlRequest setHTTPMethod:@"POST"];
[urlRequest setValue:
[NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundry]
forHTTPHeaderField:@"Content-Type"];
NSMutableData *postData =
[NSMutableData dataWithCapacity:[data length] + 512];
[postData appendData:
[[NSString stringWithFormat:@"--%@\r\n", boundry] dataUsingEncoding:NSUTF8StringEncoding]];
[postData appendData:
[[NSString stringWithFormat:
@"Content-Disposition: form-data; name=\"%@\"; filename=\"file.bin\"\r\n\r\n", FORM_FLE_INPUT]
dataUsingEncoding:NSUTF8StringEncoding]];
[postData appendData:data];
[postData appendData:
[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundry] dataUsingEncoding:NSUTF8StringEncoding]];
[urlRequest setHTTPBody:postData];
return urlRequest;
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) compress:] --
*
* Uses zlib to compress the given data.
*
* Results:
* The compressed data as a NSData object.
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (NSData *)compress: (NSData *)data // IN
{
if (!data || [data length] == 0)
return nil;
// zlib compress doc says destSize must be 1% + 12 bytes greater than source.
uLong destSize = [data length] * 1.001 + 12;
NSMutableData *destData = [NSMutableData dataWithLength:destSize];
int error = compress([destData mutableBytes],
&destSize,
[data bytes],
[data length]);
if (error != Z_OK) {
NSLog(@"%s: self:0x%p, zlib error on compress:%d\n",__func__, self, error);
return nil;
}
[destData setLength:destSize];
return destData;
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) uploadSucceeded:] --
*
* Used to notify the delegate that the upload did or did not succeed.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)uploadSucceeded: (BOOL)success // IN
{
[delegate performSelector:success ? doneSelector : errorSelector
withObject:self];
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) connectionDidFinishLoading:] --
*
* Called when the upload is complete. We judge the success of the upload
* based on the reply we get from the server.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)connectionDidFinishLoading:(NSURLConnection *)connection // IN
{
NSLog(@"%s: self:0x%p\n", __func__, self);
[connection release];
[self uploadSucceeded:uploadDidSucceed];
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) connection:didFailWithError:] --
*
* Called when the upload failed (probably due to a lack of network
* connection).
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)connection:(NSURLConnection *)connection // IN
didFailWithError:(NSError *)error // IN
{
NSLog(@"%s: self:0x%p, connection error:%s\n",
__func__, self, [[error description] UTF8String]);
[connection release];
[self uploadSucceeded:NO];
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) connection:didReceiveResponse:] --
*
* Called as we get responses from the server.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
-(void) connection:(NSURLConnection *)connection // IN
didReceiveResponse:(NSURLResponse *)response // IN
{
NSLog(@"%s: self:0x%p\n", __func__, self);
}
/*
*-----------------------------------------------------------------------------
*
* -[Uploader(Private) connection:didReceiveData:] --
*
* Called when we have data from the server. We expect the server to reply
* with a "YES" if the upload succeeded or "NO" if it did not.
*
* Results:
* None
*
* Side effects:
* None
*
*-----------------------------------------------------------------------------
*/
- (void)connection:(NSURLConnection *)connection // IN
didReceiveData:(NSData *)data // IN
{
NSLog(@"%s: self:0x%p\n", __func__, self);
NSString *reply = [[[NSString alloc] initWithData:data
encoding:NSUTF8StringEncoding]
autorelease];
NSLog(@"%s: data: %s\n", __func__, [reply UTF8String]);
if ([reply hasPrefix:@"YES"]) {
uploadDidSucceed = YES;
}
}
@end
Usage:
[[EPUploader alloc] initWithURL:[NSURL URLWithString:@"http://yourserver.com/uploadDB.php"]
filePath:@"path/to/some/file"
delegate:self
doneSelector:@selector(onUploadDone:)
errorSelector:@selector(onUploadError:)];
A: Create an NSURLRequest and then use NSURLConnection to send it off to your server.
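A bare-bones sketch of that, with a placeholder URL; note it posts the raw JPEG bytes, whereas most upload scripts expect a multipart body like the other answers build:
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/upload"]];
[request setHTTPMethod:@"POST"];
[request setHTTPBody:UIImageJPEGRepresentation(photo, 0.9f)]; // quality is 0.0-1.0
[NSURLConnection connectionWithRequest:request delegate:self];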
A: Looks like Three20 library has support for POSTing images to HTTP server.
See TTURLRequest.m
And it's released under Apache 2.0 License.
A: - (void) uploadImage :(NSString *) strRequest
{
if([appdel checkNetwork]==TRUE)
{
NSString *urlString =[NSString stringWithFormat:@"Enter Url........."];
NSLog(@"Upload %@",urlString);
// setting up the request object now
isUploadImage=TRUE;
totalsize=[[strRequest dataUsingEncoding:NSUTF8StringEncoding]length];
NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
[request setURL:[NSURL URLWithString:urlString]];
[request setHTTPMethod:@"POST"];
NSString *boundary = [NSString stringWithString:@"_1_19330907_1317415362628"];
NSString *contentType = [NSString stringWithFormat:@"multipart/mixed; boundary=%@",boundary];
[request setValue:contentType forHTTPHeaderField: @"Content-Type"];
[request setValue:@"text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2" forHTTPHeaderField:@"Accept"];
[request setValue:@"2667" forHTTPHeaderField:@"Content-Length"];
/*
now lets create the body of the post
*/
NSMutableData *body = [NSMutableData data];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithString:@"Content-Type: application/json\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
//[body appendData:[NSData dataWithData:imageData]];
[body appendData:[strRequest dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
// setting the body of the post to the reqeust
[request setHTTPBody:body];
theConnection=[[NSURLConnection alloc] initWithRequest:request delegate:self];
if (theConnection)
webData = [[NSMutableData data] retain];
else
NSLog(@"No Connection");
}
}
A: If you want to upload multiple images then this demo is a good option.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
}
|
Q: How can I associate a scriptable Mozilla plugin instance with its NObject? I'm running into a problem associating an invoked method in a plugin I'm writing with the appropriate plugin instance. The documentation at http://developer.mozilla.org/en/Gecko_Plugin_API_Reference/Scripting_plugins doesn't give enough information to be truly useful on this.
In a nutshell, I'm trying to understand just which scriptable object the plugin is expected to return in response to a call to NPP_GetValue with the variable argument equal to NPPpluginScriptableNPObject. I'm guessing that there should be an NPObject instance for each instance of the plugin, but how is the invoke() method in the NPClass supposed to find the plugin instance (NPP) from the scriptable NPObject it's given as an argument? I suppose I could implement a lookup table to do that, but I have the feeling there's something obvious I'm missing.
I'm storing a pointer to an instance of a C++ class (the instance implements the functionality of the plugin) in the pdata member of the NPP, in NPP_New().
A: I guess I'm answering my own question...
The solution I found (and I would still appreciate comments on its validity, especially if you think there's a better way of doing it) was to allocate an NPObject-derived structure, which has a pointer to my implementation class, in the allocate() function I expose to Firefox from my plugin. I then store a pointer to that NPObject in the NPP's pdata member, in NPP_New().
In invoke(), I cast the NPObject pointer I get to the derived structure's additional members, so that I can get a pointer to the instance of the implementation class.
That, as far as I can tell, is the intent of the design - NPObject objects are instances of the NPClass they point to, they implement methods and properties through the NPClass function pointers that deal with these entities, and any private data is expected to be allocated and deallocated by the implementation, and its format is unspecified.
It would look something like this:
static NPClass refObject = {
NP_CLASS_STRUCT_VERSION,
My_Allocate,
My_Deallocate,
NULL,
My_HasMethod,
My_Invoke,
My_InvokeDefault,
My_HasProperty,
My_GetProperty,
NULL,
NULL,
};
class MyImplClass {
// Implementation goes here
};
struct MyNPObject : public NPObject {
MyImplClass *my_impl_instance;
};
// This is just a bit of memory management - Mozilla wants us to allocate our own memory:
NPObject *My_Allocate(NPP inst, NPClass *)
{
// We initialize the structure in NPP_New() below
return (NPObject *)malloc(sizeof(MyNPObject));
}
NPError NPP_New( NPMIMEType pluginType, NPP instance, uint16 mode, int16 argc,
char* argn[], char* argv[], NPSavedData* saved )
{
NPObject *scriptable_object = npnfuncs->createobject(instance, &refObject);
npnfuncs->retainobject(scriptable_object);
MyImplClass *new_player = new MyImplClass();
instance->pdata = scriptable_object;
((MyNPObject*)instance->pdata)->my_impl_instance = new_player;
return NPERR_NO_ERROR;
}
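The invoke() cast described above isn't shown in the snippet, so here is a minimal sketch of it; the actual method dispatch is omitted:
static bool My_Invoke(NPObject *obj, NPIdentifier name, const NPVariant *args,
                      uint32_t argCount, NPVariant *result)
{
    // 'obj' is really the MyNPObject allocated in My_Allocate, so the
    // implementation instance is recovered with a cast, not a lookup table.
    MyImplClass *impl = ((MyNPObject *)obj)->my_impl_instance;
    // ... look up 'name', call the matching method on impl, fill 'result' ...
    return true;
}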
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to launch from Eclipse in Low priority under Windows? I'm running programs from Eclipse (on Windows) that eat a lot of CPU time. To avoid bogging down my whole machine, I set the priority to Low with the Task Manager. However, this is a cumbersome manual process. Is there a way Eclipse can set this priority automatically?
EDIT: I realized that each particular launcher (Java, Python etc) has its own configuration method, so I will restrict this question to the Java domain, which is what I need most.
A: I have the same problem on Windows--- launching subprocesses that use all the CPUs (thread-parallel jobs), yet I want good responsiveness in my Windows development environment.
Solution: after launching several jobs:
DOS cmd>> wmic process where name="javaw.exe" CALL setpriority "below normal"
No, this won't affect eclipse.exe process.
Java Solution: Insert this code into your CPU-intense programs to lower their own Windows priority:
public static void lowerMyProcessPriority() throws IOException {
String pid = ManagementFactory.getRuntimeMXBean().getName();
int p = pid.indexOf("@");
if (p > 0) pid = pid.substring(0,p);
String cmd = "wmic process where processid=<pid> CALL setpriority".replace("<pid>", pid);
List<String> ls = new ArrayList<>(Arrays.asList(cmd.split(" ")));
ls.add("\"below normal\"");
ProcessBuilder pb = new ProcessBuilder(ls);
pb.start();
}
Yes, tested. Works on Win7.
A: I'm assuming that you launch these programs using External Tools. If so, then you can modify the launch command to use the start /low hack described earlier. However, if these applications have a special launch type (like Java Application or similar), then you're in trouble. The only way you could actually change this would be to crack open the source for Eclipse, find that launch type and where it dispatches tasks and then modify it to use start /low. Sorry, but I don't think there's a simple solution to this.
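If you do go the External Tools route, the wrapper command might look something like this (the program name and arguments are illustrative); note that start treats its first quoted argument as the window title:
cmd /c start "low priority" /low /b myprogram.exe arg1 arg2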
A: A better alternative is configure the amount of memory that Eclipse will use: http://www.eclipsezone.com/eclipse/forums/t61618.html
And do a google search about -Xmx and -Xms parameters for JVM (which you could configure for runners inside Eclipse).
Kind Regards
A: I spent some time a while back to investigate this. There is no method inside Java to lower the process priority (as far as I could see), so you need to employ the "start /low" approach. I didn't try to get that working.
Instead I got a multi-core processor. This gives room for other stuff, even if a Java program runs amok on one core.
Strongly recommended.
A: I would like this as well, odd that it is not possible, really. I know you can set thread-priorities, but I think in windows-land, threads are all scheduled "inside" the process priority so to speak.
A: I've run into the same question in Linux and have found this:
http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
The interesting bit is about how to convert Java thread priorities to OS priorities. This has solved the problem for me, it may help you too.
A: Use an OS tool or a fake java.exe to capture the really long javaw.exe command that the eclipse launcher spawns.
Bypass eclipse.exe and launch the javaw.exe directly.
Once you have javaw.exe launching directly and correctly, launch it at lowered priority:
START /BELOWNORMAL \path\javaw.exe lots-of-parameters-to-load-workspace
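For the capture step, a fake javaw could be as simple as a batch file that logs its arguments and forwards to the real JVM; this only works if the launcher resolves javaw via the PATH, and the JDK path below is illustrative:
@echo off
rem log the full command line, then hand off to the real javaw
echo %* >> "%TEMP%\javaw-args.log"
"C:\Program Files\Java\jdk\bin\javaw.exe" %*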
A: If you need it only in development you can set the priority of all other processes to high...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Selenium Remote Control HTML Source Extraction in Internet Explorer Selenium Remote Control has a method of "get_html_source", which returns the source of the current page as a string.
AFAIK, this method works in all cases in Firefox and Safari. But when it's invoked in Internet Explorer, it returns an incorrect source.
Does anyone know if this is a bug with Selenium or Internet Explorer, and if there's a fix?
A: I'm 99% sure get_html_source uses the browser's innerHTML property. InnerHTML returns the browser's internal representation of a document, and has always been inconsistent and "wonky" between platforms.
You can test this by temporarily adding the following onload attribute to the body tag of your page.
onload="var oArea = document.createElement('textarea');oArea.rows=80;oArea.cols=80;oArea.value = document.getElementsByTagName('html')[0].innerHTML;document.getElementsByTagName('body')[0].appendChild(oArea)"
This will add a text area to the bottom of your page with the document's innerHTML. If you see the same "incorrect" HTML source you know IE's the culprit here.
Possible workarounds would be running the source through HTML Tidy or some other cleaner if you're after valid markup. I don't know of anything that will give you a consistent rendering between browsers.
A: Thanks Alan. It turns out it was a problem with the different browsers' implementations of innerHTML.
For tags having to do with lists, like <li>, the end tags are optional.
Browsers like Safari and Firefox pick up on the end tags with their respective innerHTML methods, but Internet Explorer's innerHTML method ignores them.
Since lists are structured, e.g.
<li>apple
<li>pear
a regex replace on the html source string should do the trick.
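As a rough Python sketch of that normalization (the tag list is illustrative, not exhaustive):
import re

def normalize_html(source):
    # strip the optional end tags that IE's innerHTML drops anyway,
    # so sources from different browsers can be compared
    return re.sub(r'(?i)</(li|dt|dd|option|p)>', '', source)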
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Should 'using' directives be inside or outside the namespace in C#? I have been running StyleCop over some C# code, and it keeps reporting that my using directives should be inside the namespace.
Is there a technical reason for putting the using directives inside instead of outside the namespace?
A: As Jeppe Stig Nielsen said, this thread already has great answers, but I thought this rather obvious subtlety was worth mentioning too.
using directives specified inside namespaces can make for shorter code since they don't need to be fully qualified as when they're specified on the outside.
The following example works because the types Foo and Bar are both under the same outer namespace, Outer.
Presume the code file Foo.cs:
namespace Outer.Inner
{
class Foo { }
}
And Bar.cs:
namespace Outer
{
using Outer.Inner;
class Bar
{
public Foo foo;
}
}
Bar.cs may omit the outer namespace from the using directive, for short:
namespace Outer
{
using Inner;
class Bar
{
public Foo foo;
}
}
A: According to Hanselman - Using Directive and Assembly Loading... and other such articles there is technically no difference.
My preference is to put them outside of namespaces.
A: According the to StyleCop Documentation:
SA1200: UsingDirectivesMustBePlacedWithinNamespace
Cause
A C# using directive is placed outside of a namespace element.
Rule Description
A violation of this rule occurs when a using directive or a using-alias directive is placed outside of a namespace element, unless the file does not contain any namespace elements.
For example, the following code would result in two violations of this rule.
using System;
using Guid = System.Guid;
namespace Microsoft.Sample
{
public class Program
{
}
}
The following code, however, would not result in any violations of this rule:
namespace Microsoft.Sample
{
using System;
using Guid = System.Guid;
public class Program
{
}
}
This code will compile cleanly, without any compiler errors. However, it is unclear which version of the Guid type is being allocated. If the using directive is moved inside of the namespace, as shown below, a compiler error will occur:
namespace Microsoft.Sample
{
using Guid = System.Guid;
public class Guid
{
public Guid(string s)
{
}
}
public class Program
{
public static void Main(string[] args)
{
Guid g = new Guid("hello");
}
}
}
The code fails on the following compiler error, found on the line containing Guid g = new Guid("hello");
CS0576: Namespace 'Microsoft.Sample' contains a definition conflicting with alias 'Guid'
The code creates an alias to the System.Guid type called Guid, and also creates its own type called Guid with a matching constructor interface. Later, the code creates an instance of the type Guid. To create this instance, the compiler must choose between the two different definitions of Guid. When the using-alias directive is placed outside of the namespace element, the compiler will choose the local definition of Guid defined within the local namespace, and completely ignore the using-alias directive defined outside of the namespace. This, unfortunately, is not obvious when reading the code.
When the using-alias directive is positioned within the namespace, however, the compiler has to choose between two different, conflicting Guid types both defined within the same namespace. Both of these types provide a matching constructor. The compiler is unable to make a decision, so it flags the compiler error.
Placing the using-alias directive outside of the namespace is a bad practice because it can lead to confusion in situations such as this, where it is not obvious which version of the type is actually being used. This can potentially lead to a bug which might be difficult to diagnose.
Placing using-alias directives within the namespace element eliminates this as a source of bugs.
*Multiple Namespaces
Placing multiple namespace elements within a single file is generally a bad idea, but if and when this is done, it is a good idea to place all using directives within each of the namespace elements, rather than globally at the top of the file. This will scope the namespaces tightly, and will also help to avoid the kind of behavior described above.
It is important to note that when code has been written with using directives placed outside of the namespace, care should be taken when moving these directives within the namespace, to ensure that this is not changing the semantics of the code. As explained above, placing using-alias directives within the namespace element allows the compiler to choose between conflicting types in ways that will not happen when the directives are placed outside of the namespace.
How to Fix Violations
To fix a violation of this rule, move all using directives and using-alias directives within the namespace element.
A: The technical reasons are discussed in the answers, and I think it comes down to personal preference in the end, since the difference is not that big and there are tradeoffs for both. Visual Studio's default template for creating .cs files uses using directives outside of namespaces, for example.
One can adjust StyleCop to check for using directives outside of namespaces by adding a stylecop.json file in the root of the project with the following:
{
  "$schema": "https://raw.githubusercontent.com/DotNetAnalyzers/StyleCopAnalyzers/master/StyleCop.Analyzers/StyleCop.Analyzers/Settings/stylecop.schema.json",
  "settings": {
    "orderingRules": {
      "usingDirectivesPlacement": "outsideNamespace"
    }
  }
}
You can create this config file at the solution level and add it to your projects as a linked existing item to share the config across all of your projects too.
A: This thread already has some great answers, but I feel I can bring a little more detail with this additional answer.
First, remember that a namespace declaration with periods, like:
namespace MyCorp.TheProduct.SomeModule.Utilities
{
...
}
is entirely equivalent to:
namespace MyCorp
{
namespace TheProduct
{
namespace SomeModule
{
namespace Utilities
{
...
}
}
}
}
If you wanted to, you could put using directives on all of these levels. (Of course, we want to have usings in only one place, but it would be legal according to the language.)
The rule for resolving which type is implied, can be loosely stated like this: First search the inner-most "scope" for a match, if nothing is found there go out one level to the next scope and search there, and so on, until a match is found. If at some level more than one match is found, if one of the types are from the current assembly, pick that one and issue a compiler warning. Otherwise, give up (compile-time error).
Now, let's be explicit about what this means in a concrete example with the two major conventions.
(1) With usings outside:
using System;
using System.Collections.Generic;
using System.Linq;
//using MyCorp.TheProduct; <-- uncommenting this would change nothing
using MyCorp.TheProduct.OtherModule;
using MyCorp.TheProduct.OtherModule.Integration;
using ThirdParty;
namespace MyCorp.TheProduct.SomeModule.Utilities
{
class C
{
Ambiguous a;
}
}
In the above case, to find out what type Ambiguous is, the search goes in this order:
*
*Nested types inside C (including inherited nested types)
*Types in the current namespace MyCorp.TheProduct.SomeModule.Utilities
*Types in namespace MyCorp.TheProduct.SomeModule
*Types in MyCorp.TheProduct
*Types in MyCorp
*Types in the null namespace (the global namespace)
*Types in System, System.Collections.Generic, System.Linq, MyCorp.TheProduct.OtherModule, MyCorp.TheProduct.OtherModule.Integration, and ThirdParty
The other convention:
(2) With usings inside:
namespace MyCorp.TheProduct.SomeModule.Utilities
{
using System;
using System.Collections.Generic;
using System.Linq;
using MyCorp.TheProduct; // MyCorp can be left out; this using is NOT redundant
using MyCorp.TheProduct.OtherModule; // MyCorp.TheProduct can be left out
using MyCorp.TheProduct.OtherModule.Integration; // MyCorp.TheProduct can be left out
using ThirdParty;
class C
{
Ambiguous a;
}
}
Now, search for the type Ambiguous goes in this order:
*
*Nested types inside C (including inherited nested types)
*Types in the current namespace MyCorp.TheProduct.SomeModule.Utilities
*Types in System, System.Collections.Generic, System.Linq, MyCorp.TheProduct, MyCorp.TheProduct.OtherModule, MyCorp.TheProduct.OtherModule.Integration, and ThirdParty
*Types in namespace MyCorp.TheProduct.SomeModule
*Types in MyCorp
*Types in the null namespace (the global namespace)
(Note that MyCorp.TheProduct was a part of "3." and was therefore not needed between "4." and "5.".)
Concluding remarks
No matter if you put the usings inside or outside the namespace declaration, there's always the possibility that someone later adds a new type with identical name to one of the namespaces which have higher priority.
Also, if a nested namespace has the same name as a type, it can cause problems.
It is always dangerous to move the usings from one location to another because the search hierarchy changes, and another type may be found. Therefore, choose one convention and stick to it, so that you won't have to ever move usings.
Visual Studio's templates, by default, put the usings outside of the namespace (for example if you make VS generate a new class in a new file).
One (tiny) advantage of having usings outside is that you can then utilize the using directives for a global attribute, for example [assembly: ComVisible(false)] instead of [assembly: System.Runtime.InteropServices.ComVisible(false)].
Update about file-scoped namespace declarations
Since C# 10.0 (from 2021), you can avoid indentation and use either (convention 1, usings outside):
using System;
using System.Collections.Generic;
using System.Linq;
using MyCorp.TheProduct.OtherModule;
using MyCorp.TheProduct.OtherModule.Integration;
using ThirdParty;
namespace MyCorp.TheProduct.SomeModule.Utilities;
class C
{
Ambiguous a;
}
or (convention 2, usings inside):
namespace MyCorp.TheProduct.SomeModule.Utilities;
using System;
using System.Collections.Generic;
using System.Linq;
using MyCorp.TheProduct;
using MyCorp.TheProduct.OtherModule;
using MyCorp.TheProduct.OtherModule.Integration;
using ThirdParty;
class C
{
Ambiguous a;
}
But the same considerations as before apply.
A: Not already mentioned:
Placing the using directives inside the namespace declaration is an application of the well-known best programming practice of declaring everything in the smallest scope possible.
If best programming practices are second nature to you, then you do things like that automatically.
This might be the best reason for putting your using directives inside the namespace declaration, regardless of the (borderline) technical merits mentioned elsewhere; it's as simple as that.
Already mentioned but perhaps better illustrated:
Placing using directives inside the namespace avoids unnecessary repetition and makes our declarations more terse.
This is unnecessarily repetitive:
using Com.Acme.Products.Traps.RoadRunnerTraps;
namespace Com.Acme.Products.Traps {
This is sweet and to the point:
namespace Com.Acme.Products.Traps {
using RoadRunnerTraps;
A: There is an issue with placing using statements inside the namespace when you wish to use aliases. The alias doesn't benefit from the earlier using statements and has to be fully qualified.
Consider:
namespace MyNamespace
{
using System;
using MyAlias = System.DateTime;
class MyClass
{
}
}
versus:
using System;
namespace MyNamespace
{
using MyAlias = DateTime;
class MyClass
{
}
}
This can be particularly pronounced if you have a long-winded alias such as the following (which is how I found the problem):
using MyAlias = Tuple<Expression<Func<DateTime, object>>, Expression<Func<TimeSpan, object>>>;
With using statements inside the namespace, it suddenly becomes:
using MyAlias = System.Tuple<System.Linq.Expressions.Expression<System.Func<System.DateTime, object>>, System.Linq.Expressions.Expression<System.Func<System.TimeSpan, object>>>;
Not pretty.
A: As a rule, external using directives (System and Microsoft namespaces for example) should be placed outside the namespace directive. They are defaults that should be applied in all cases unless otherwise specified. This should include any of your own organization's internal libraries that are not part of the current project, or using directives that reference other primary namespaces in the same project. Any using directives that reference other modules in the current project and namespace should be placed inside the namespace directive. This serves two specific functions:
*
*It provides a visual distinction between local modules and 'other' modules, meaning everything else.
*It scopes the local directives to be applied preferentially over global directives.
The latter reason is significant. It means that it's harder to introduce an ambiguous reference issue that can be introduced by a change no more significant than refactoring code. That is to say, you move a method from one file to another and suddenly a bug shows up that wasn't there before. Colloquially, a 'heisenbug' - historically fiendishly difficult to track down.
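As a sketch of this convention (all names here are hypothetical; the point is the visual split between "everything else" and local modules):

// Defaults: external and cross-project references
using System;
using System.Collections.Generic;
using OurCompany.CommonLibrary;   // internal library, but not part of this project

namespace OurCompany.ThisProject.Billing
{
    // Local modules from the current project and namespace
    using OurCompany.ThisProject.Billing.Validation;

    class InvoiceProcessor
    {
        // ...
    }
}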
A: There is actually a (subtle) difference between the two. Imagine you have the following code in File1.cs:
// File1.cs
using System;
namespace Outer.Inner
{
class Foo
{
static void Bar()
{
double d = Math.PI;
}
}
}
Now imagine that someone adds another file (File2.cs) to the project that looks like this:
// File2.cs
namespace Outer
{
class Math
{
}
}
The compiler searches Outer before looking at those using directives outside the namespace, so it finds Outer.Math instead of System.Math. Unfortunately (or perhaps fortunately?), Outer.Math has no PI member, so File1 is now broken.
This changes if you put the using inside your namespace declaration, as follows:
// File1b.cs
namespace Outer.Inner
{
using System;
class Foo
{
static void Bar()
{
double d = Math.PI;
}
}
}
Now the compiler searches System before searching Outer, finds System.Math, and all is well.
Some would argue that Math might be a bad name for a user-defined class, since there's already one in System; the point here is just that there is a difference, and it affects the maintainability of your code.
It's also interesting to note what happens if Foo is in namespace Outer, rather than Outer.Inner. In that case, adding Outer.Math in File2 breaks File1 regardless of where the using goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any using directive.
A: Putting it inside the namespaces makes the declarations local to that namespace for the file (in case you have multiple namespaces in the file) but if you only have one namespace per file then it doesn't make much of a difference whether they go outside or inside the namespace.
using ThisNamespace.IsImported.InAllNamespaces.Here;
namespace Namespace1
{
using ThisNamespace.IsImported.InNamespace1.AndNamespace2;
namespace Namespace2
{
using ThisNamespace.IsImported.InJustNamespace2;
}
}
namespace Namespace3
{
using ThisNamespace.IsImported.InJustNamespace3;
}
A: One wrinkle I ran into (that isn't covered in other answers):
Suppose you have these namespaces:
*
*Something.Other
*Parent.Something.Other
When you use using Something.Other outside of a namespace Parent, it refers to the first one (Something.Other).
However if you use it inside of that namespace declaration, it refers to the second one (Parent.Something.Other)!
There is a simple solution: add the "global::" prefix: docs
namespace Parent
{
using global::Something.Other;
// etc
}
A: Another subtlety that I don't believe has been covered by the other answers is for when you have a class and namespace with the same name.
When you have the import inside the namespace then it will find the class. If the import is outside the namespace then the import will be ignored and the class and namespace have to be fully qualified.
//file1.cs
namespace Foo
{
class Foo
{
}
}
//file2.cs
namespace ConsoleApp3
{
using Foo;
class Program
{
static void Main(string[] args)
{
//This will allow you to use the class
Foo test = new Foo();
}
}
}
//file3.cs
using Foo; //Unused and redundant
namespace Bar
{
class Bar
{
Bar()
{
Foo.Foo test = new Foo.Foo();
Foo test = new Foo(); //will give you an error that a namespace is being used like a class.
}
}
}
A: A better practice is to keep the default usings (the references already in use across your solution) outside the namespace, and to put newly added references inside the namespace. This makes it easy to distinguish which references have been added.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2361"
}
|
Q: How to implement simple threading with a fixed number of worker threads I'm looking for the simplest, most straightforward way to implement the following:
*
*The main program instantiates worker threads to do a task.
*Only n tasks can be running at once.
*When n is reached, no more workers are started until the count of running threads drops back below n.
A: Executors.newFixedThreadPool(int)
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

Executor executor = Executors.newFixedThreadPool(n);
Runnable runnable = new Runnable() {
    public void run() {
        // do your thing here
    }
};
executor.execute(runnable);
A: I think that Executors.newFixedThreadPool fits your requirements. There are a number of different ways to use the resulting ExecutorService, depending on whether you want a result returned to the main thread, or whether the task is totally self-contained, and whether you have a collection of tasks to perform up front, or whether tasks are queued in response to some event.
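// Note: YourTask must implement Callable<YourResultType> for invokeAll below.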
Collection<YourTask> tasks = new ArrayList<YourTask>();
YourTask yt1 = new YourTask();
...
tasks.add(yt1);
...
ExecutorService exec = Executors.newFixedThreadPool(5);
List<Future<YourResultType>> results = exec.invokeAll(tasks);
Alternatively, if you have a new asynchronous task to perform in response to some event, you probably just want to use the ExecutorService's simple execute(Runnable) method.
A: /* Get an executor service that will run a maximum of 5 threads at a time: */
ExecutorService exec = Executors.newFixedThreadPool(5);
/* For all the 100 tasks to be done altogether... */
for (int i = 0; i < 100; i++) {
/* ...execute the task to run concurrently as a runnable: */
exec.execute(new Runnable() {
public void run() {
/* do the work to be done in its own thread */
System.out.println("Running in: " + Thread.currentThread());
}
});
}
/* Tell the executor that after these 100 steps above, we will be done: */
exec.shutdown();
try {
/* The tasks are now running concurrently. We wait until all work is done,
* with a timeout of 50 seconds: */
boolean b = exec.awaitTermination(50, TimeUnit.SECONDS);
/* If the execution timed out, false is returned: */
System.out.println("All done: " + b);
} catch (InterruptedException e) { e.printStackTrace(); }
A: Use the Executor framework; namely newFixedThreadPool(N)
A: *
*If your task queue is not going to grow without bound and tasks complete within short intervals, you can use Executors.newFixedThreadPool(n); as suggested in the other answers.
The only drawback of this solution is the unbounded task queue size; you have no control over it. A huge pile-up in the task queue will degrade application performance and may cause out-of-memory errors in some scenarios.
*If you want an ExecutorService with a work-stealing mechanism, where idle worker threads share the load of busy worker threads by stealing tasks from their queues, use Executors.newWorkStealingPool. It returns a ForkJoinPool-backed ExecutorService.
public static ExecutorService newWorkStealingPool(int parallelism)
Creates a thread pool that maintains enough threads to support the given parallelism level, and may use multiple queues to reduce contention. The parallelism level corresponds to the maximum number of threads actively engaged in, or available to engage in, task processing. The actual number of threads may grow and shrink dynamically. A work-stealing pool makes no guarantees about the order in which submitted tasks are executed.
*I prefer ThreadPoolExecutor due to the flexibility of its API, which lets you control the many parameters that govern the flow of task execution.
ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler)
In your case, set both corePoolSize and maximumPoolSize to N. Here you can control the task queue size and define your own custom thread factory and rejection handler policy.
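For instance, a minimal sketch of such a pool (the queue capacity, thread factory, and rejection policy are illustrative choices, not requirements):

import java.util.concurrent.*;

int n = 4; // N: the maximum number of concurrently running tasks
ExecutorService pool = new ThreadPoolExecutor(
        n,                                          // corePoolSize
        n,                                          // maximumPoolSize
        0L, TimeUnit.MILLISECONDS,                  // keepAliveTime (no excess threads here)
        new ArrayBlockingQueue<Runnable>(100),      // bounded task queue
        Executors.defaultThreadFactory(),
        new ThreadPoolExecutor.CallerRunsPolicy()); // throttles submitters when the queue is full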
Have a look at related SE question to control the pool size dynamically:
Dynamic Thread Pool
A: If you want to roll your own:
private static final int MAX_WORKERS = n;
private List<Worker> workers = new ArrayList<Worker>(MAX_WORKERS);
private boolean roomLeft() {
synchronized (workers) {
return (workers.size() < MAX_WORKERS);
}
}
private void addWorker() {
synchronized (workers) {
workers.add(new Worker(this));
}
}
public void removeWorker(Worker worker) {
synchronized (workers) {
workers.remove(worker);
}
}
public Example() {
    while (true) {
        if (roomLeft()) {
            addWorker();
        } else {
            try {
                Thread.sleep(50); // back off instead of busy-waiting while the pool is full
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
Where Worker is your class that extends Thread. Each worker will call this class's removeWorker method, passing itself in as a parameter, when it's finished doing its thing.
With that said, the Executor framework looks a lot better.
Edit: Anyone care to explain why this is so bad, instead of just downmodding it?
A: As others here have mentioned, your best bet is to make a thread pool with the Executors class:
However, if you want to roll your own, this code should give you an idea how to proceed. Basically, just add every new thread to a thread group and make sure that you never have more than N active threads in the group:
Task[] tasks = getTasks(); // array of tasks to complete
ThreadGroup group = new ThreadGroup("workers"); // ThreadGroup has no no-arg constructor; it requires a name
int i=0;
while( i<tasks.length || group.activeCount()>0 ) {
if( group.activeCount()<N && i<tasks.length ) {
new TaskThread(group, tasks[i]).start();
i++;
    } else {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            break;
        }
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: How do you do Impersonation in .NET? Is there a simple out of the box way to impersonate a user in .NET?
So far I've been using this class from code project for all my impersonation requirements.
Is there a better way to do it by using .NET Framework?
I have a user credential set, (username, password, domain name) which represents the identity I need to impersonate.
A: Here's my VB.NET port of Matt Johnson's answer. I added an enum for the logon types. LOGON32_LOGON_INTERACTIVE was the first enum value that worked for SQL Server. My connection string used trusted authentication; no user name / password in the connection string.
<PermissionSet(SecurityAction.Demand, Name:="FullTrust")> _
Public Class Impersonation
Implements IDisposable
Public Enum LogonTypes
''' <summary>
''' This logon type is intended for users who will be interactively using the computer, such as a user being logged on
''' by a terminal server, remote shell, or similar process.
''' This logon type has the additional expense of caching logon information for disconnected operations;
''' therefore, it is inappropriate for some client/server applications,
''' such as a mail server.
''' </summary>
LOGON32_LOGON_INTERACTIVE = 2
''' <summary>
''' This logon type is intended for high performance servers to authenticate plaintext passwords.
''' The LogonUser function does not cache credentials for this logon type.
''' </summary>
LOGON32_LOGON_NETWORK = 3
''' <summary>
''' This logon type is intended for batch servers, where processes may be executing on behalf of a user without
''' their direct intervention. This type is also for higher performance servers that process many plaintext
''' authentication attempts at a time, such as mail or Web servers.
''' The LogonUser function does not cache credentials for this logon type.
''' </summary>
LOGON32_LOGON_BATCH = 4
''' <summary>
''' Indicates a service-type logon. The account provided must have the service privilege enabled.
''' </summary>
LOGON32_LOGON_SERVICE = 5
''' <summary>
''' This logon type is for GINA DLLs that log on users who will be interactively using the computer.
''' This logon type can generate a unique audit record that shows when the workstation was unlocked.
''' </summary>
LOGON32_LOGON_UNLOCK = 7
''' <summary>
''' This logon type preserves the name and password in the authentication package, which allows the server to make
''' connections to other network servers while impersonating the client. A server can accept plaintext credentials
''' from a client, call LogonUser, verify that the user can access the system across the network, and still
''' communicate with other servers.
''' NOTE: Windows NT: This value is not supported.
''' </summary>
LOGON32_LOGON_NETWORK_CLEARTEXT = 8
''' <summary>
''' This logon type allows the caller to clone its current token and specify new credentials for outbound connections.
''' The new logon session has the same local identifier but uses different credentials for other network connections.
''' NOTE: This logon type is supported only by the LOGON32_PROVIDER_WINNT50 logon provider.
''' NOTE: Windows NT: This value is not supported.
''' </summary>
LOGON32_LOGON_NEW_CREDENTIALS = 9
End Enum
<DllImport("advapi32.dll", SetLastError:=True, CharSet:=CharSet.Unicode)> _
Private Shared Function LogonUser(lpszUsername As [String], lpszDomain As [String], lpszPassword As [String], dwLogonType As Integer, dwLogonProvider As Integer, ByRef phToken As SafeTokenHandle) As Boolean
End Function
Public Sub New(Domain As String, UserName As String, Password As String, Optional LogonType As LogonTypes = LogonTypes.LOGON32_LOGON_INTERACTIVE)
Dim ok = LogonUser(UserName, Domain, Password, LogonType, 0, _SafeTokenHandle)
If Not ok Then
Dim errorCode = Marshal.GetLastWin32Error()
Throw New ApplicationException(String.Format("Could not impersonate the elevated user. LogonUser returned error code {0}.", errorCode))
End If
WindowsImpersonationContext = WindowsIdentity.Impersonate(_SafeTokenHandle.DangerousGetHandle())
End Sub
Private ReadOnly _SafeTokenHandle As New SafeTokenHandle
Private ReadOnly WindowsImpersonationContext As WindowsImpersonationContext
Public Sub Dispose() Implements System.IDisposable.Dispose
Me.WindowsImpersonationContext.Dispose()
Me._SafeTokenHandle.Dispose()
End Sub
Public NotInheritable Class SafeTokenHandle
Inherits SafeHandleZeroOrMinusOneIsInvalid
<DllImport("kernel32.dll")> _
<ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)> _
<SuppressUnmanagedCodeSecurity()> _
Private Shared Function CloseHandle(handle As IntPtr) As <MarshalAs(UnmanagedType.Bool)> Boolean
End Function
Public Sub New()
MyBase.New(True)
End Sub
Protected Overrides Function ReleaseHandle() As Boolean
Return CloseHandle(handle)
End Function
End Class
End Class
You need to wrap it in a Using statement to contain the code that runs impersonated.
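For example (the domain, user name, and password here are placeholders):

Using imp As New Impersonation("MYDOMAIN", "someuser", "secret")
    ' Everything in this block runs as the impersonated user.
End Using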
A: Here is some good overview of .NET impersonation concepts.
*
*Michiel van Otegem: WindowsImpersonationContext made easy
*WindowsIdentity.Impersonate Method (check out the code samples)
Basically you will be leveraging these classes that are out of the box in the .NET framework:
*
*WindowsImpersonationContext
*WindowsIdentity
The code can often get lengthy though and that is why you see many examples like the one you reference that try to simplify the process.
A: See my previous answer for more detail.
I have created a NuGet package:
Nuget
Code on Github
Sample usage:
string login = "";
string domain = "";
string password = "";
using (UserImpersonation user = new UserImpersonation(login, domain, password))
{
if (user.ImpersonateValidUser())
{
File.WriteAllText("test.txt", "your text");
Console.WriteLine("File writed");
}
else
{
Console.WriteLine("User not connected");
}
}
View the full code :
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
/// <summary>
/// Object to change the authenticated user
/// </summary>
public class UserImpersonation : IDisposable
{
/// <summary>
/// Logon method (checks authentication) from advapi32.dll
/// </summary>
/// <param name="lpszUserName"></param>
/// <param name="lpszDomain"></param>
/// <param name="lpszPassword"></param>
/// <param name="dwLogonType"></param>
/// <param name="dwLogonProvider"></param>
/// <param name="phToken"></param>
/// <returns></returns>
[DllImport("advapi32.dll")]
private static extern bool LogonUser(String lpszUserName,
String lpszDomain,
String lpszPassword,
int dwLogonType,
int dwLogonProvider,
ref IntPtr phToken);
/// <summary>
/// Close
/// </summary>
/// <param name="handle"></param>
/// <returns></returns>
[DllImport("kernel32.dll", CharSet = CharSet.Auto)]
public static extern bool CloseHandle(IntPtr handle);
private WindowsImpersonationContext _windowsImpersonationContext;
private IntPtr _tokenHandle;
private string _userName;
private string _domain;
private string _passWord;
const int LOGON32_PROVIDER_DEFAULT = 0;
const int LOGON32_LOGON_INTERACTIVE = 2;
/// <summary>
/// Initialize a UserImpersonation
/// </summary>
/// <param name="userName"></param>
/// <param name="domain"></param>
/// <param name="passWord"></param>
public UserImpersonation(string userName, string domain, string passWord)
{
_userName = userName;
_domain = domain;
_passWord = passWord;
}
/// <summary>
/// Validate the user information
/// </summary>
/// <returns></returns>
public bool ImpersonateValidUser()
{
bool returnValue = LogonUser(_userName, _domain, _passWord,
LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
ref _tokenHandle);
if (false == returnValue)
{
return false;
}
WindowsIdentity newId = new WindowsIdentity(_tokenHandle);
_windowsImpersonationContext = newId.Impersonate();
return true;
}
#region IDisposable Members
/// <summary>
/// Dispose the UserImpersonation connection
/// </summary>
public void Dispose()
{
if (_windowsImpersonationContext != null)
_windowsImpersonationContext.Undo();
if (_tokenHandle != IntPtr.Zero)
CloseHandle(_tokenHandle);
}
#endregion
}
A: I'm aware that I'm quite late to the party, but I consider the library from Phillip Allan-Harding to be the best one for this case and similar ones.
You only need a small piece of code like this one:
private const string LOGIN = "mamy";
private const string DOMAIN = "mongo";
private const string PASSWORD = "HelloMongo2017";
private void DBConnection()
{
using (Impersonator user = new Impersonator(LOGIN, DOMAIN, PASSWORD, LogonType.LOGON32_LOGON_NEW_CREDENTIALS, LogonProvider.LOGON32_PROVIDER_WINNT50))
{
}
}
And add his class:
.NET (C#) Impersonation with Network Credentials
My example can be used if you require the impersonated login to have network credentials, but it has more options.
A: "Impersonation" in the .NET space generally means running code under a specific user account. It is a somewhat separate concept than getting access to that user account via a username and password, although these two ideas pair together frequently.
Impersonation
The APIs for impersonation are provided in .NET via the System.Security.Principal namespace:
*
*Newer code should generally use WindowsIdentity.RunImpersonated, which accepts a handle to the token of the user account, and then either an Action or Func<T> for the code to execute.
WindowsIdentity.RunImpersonated(userHandle, () =>
{
// do whatever you want as this user.
});
or
var result = WindowsIdentity.RunImpersonated(userHandle, () =>
{
// do whatever you want as this user.
return result;
});
There's also WindowsIdentity.RunImpersonatedAsync for async tasks, available on .NET 5+, or older versions if you pull in the System.Security.Principal.Windows Nuget package.
await WindowsIdentity.RunImpersonatedAsync(userHandle, async () =>
{
// do whatever you want as this user.
});
or
var result = await WindowsIdentity.RunImpersonatedAsync(userHandle, async () =>
{
// do whatever you want as this user.
return result;
});
*Older code used the WindowsIdentity.Impersonate method to retrieve a WindowsImpersonationContext object. This object implements IDisposable, so generally should be called from a using block.
using (WindowsImpersonationContext context = WindowsIdentity.Impersonate(userHandle))
{
// do whatever you want as this user.
}
While this API still exists in .NET Framework, it should generally be avoided.
Accessing the User Account
The API for using a username and password to gain access to a user account in Windows is LogonUser - which is a Win32 native API. There is not currently a built-in managed .NET API for calling it.
[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
internal static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);
This is the basic call definition; however, there is a lot more to consider when actually using it in production:
*
*Obtaining a handle with the "safe" access pattern.
*Closing the native handles appropriately
*Code access security (CAS) trust levels (in .NET Framework only)
*Passing SecureString when you can collect one safely via user keystrokes.
Instead of writing that code yourself, consider using my SimpleImpersonation library, which provides a managed wrapper around the LogonUser API to get a user handle:
using System.Security.Principal;
using Microsoft.Win32.SafeHandles;
using SimpleImpersonation;
var credentials = new UserCredentials(domain, username, password);
using SafeAccessTokenHandle userHandle = credentials.LogonUser(LogonType.Interactive); // or another LogonType
You can now use that userHandle with any of the methods mentioned in the first section above. This is the preferred API as of version 4.0.0 of the SimpleImpersonation library. See the project readme for more details.
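Putting the two sections together, a minimal sketch might look like this (the credentials are placeholders):

using System.Security.Principal;
using Microsoft.Win32.SafeHandles;
using SimpleImpersonation;

var credentials = new UserCredentials("MYDOMAIN", "someuser", "secret");
using SafeAccessTokenHandle userHandle = credentials.LogonUser(LogonType.Interactive);

WindowsIdentity.RunImpersonated(userHandle, () =>
{
    // code here runs as the impersonated user
});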
Remote Computer Access
It's important to recognize that impersonation is a local machine concept. One cannot impersonate using a user that is only known to a remote machine. If you want to access resources on a remote computer, the local machine and the remote machine must be attached to the same domain, or there needs to be a trust relationship between the domains of the two machines. If either computer is domainless, you cannot use LogonUser or SimpleImpersonation to connect to that machine.
A: This is probably what you want:
using System.Security.Principal;
using(WindowsIdentity.GetCurrent().Impersonate())
{
//your code goes here
}
But I really need more details to help you out. You could do impersonation with a config file (if you're trying to do this on a website), or through method decorators (attributes) if it's a WCF service, or through... you get the idea.
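For the config-file case in ASP.NET, that's the <identity> element in web.config, for example:

<system.web>
  <!-- Impersonate the authenticated caller; add userName/password to impersonate a fixed account. -->
  <identity impersonate="true" />
</system.web>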
Also, if we're talking about impersonating a client that called a particular service (or web app), you need to configure the client correctly so that it passes the appropriate tokens.
Finally, if what you really want do is Delegation, you also need to setup AD correctly so that users and machines are trusted for delegation.
Edit:
Take a look here to see how to impersonate a different user, and for further documentation.
A: You can use this solution. (Use nuget package)
The source code is available on : Github:
https://github.com/michelcedric/UserImpersonation
More detail
https://michelcedric.wordpress.com/2015/09/03/usurpation-didentite-dun-user-c-user-impersonation/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "151"
}
|
Q: Plugins won't work with VB.NET Express Why won't plugins work with the VB/C# Visual Studio Express editions?
A: The Express editions do not support Visual Studio Addins.
A: Microsoft doesn't want to allow users to replace all the Standard+ functionality in Visual Studio when using the Express editions, as that would be akin to starving themselves. So, the decision was made that plugins would not be supported in the Express editions.
If there are plugins you can't live without, buying a license for the Standard edition isn't too much money, and they almost always give away free copies of standard at launch events for new versions of Visual Studio.
A: There are ways to make add-ins work in the Express editions, but it's not worth Microsoft's wrath. That's what happened with TestDriven.NET. Why don't you just get the Professional edition? If cost is an issue, you could try having one of your buddies still at the university buy an academic version.
You could also try using SharpDevelop. It comes with a lot of useful addins.
A: Students qualify for free Microsoft software, which include Visual Studio Pro.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Any clever ways of handling the context in a web app? In Java, web apps are bundled in to WARs. By default, many servlet containers will use the WAR name as the context name for the application.
Thus myapp.war gets deployed to http://example.com/myapp.
The problem is that the webapp considers its "root" to be, well, "root", or simply "/", whereas HTML would consider the root of your application to be "/myapp".
The Servlet API and JSP have facilities to help manage this. For example, if, in a servlet, you do: response.sendRedirect("/mypage.jsp"), the container will prepend the context and create the url: http://example.com/myapp/mypage.jsp".
However, you can't do that with, say, the IMG tag in HTML. If you do <img src="/myimage.gif"/> you will likely get a 404, because what you really wanted was "/myapp/myimage.gif".
Many frameworks have JSP tags that are context aware as well, and there are different ways of making correct URLs within JSP (none particularly elegantly).
It's a knotty problem for coders: constantly deciding when to use an "app-relative" URL versus an absolute URL.
Finally, there's the issue of Javascript code that needs to create URLs on the fly, and embedded URLs within CSS (for background images and the like).
I'm curious what techniques others use to mitigate and work around this issue. Many simply punt and hard code it, either to server root or to whatever context they happen to be using. I already know that answer, that's not what I'm looking for.
What do you do?
A: For HTML pages, I just set the HTML <base> tag. Every relative link (i.e. not starting with a scheme or /) will become relative to it. There is no clean way to grab it directly from HttpServletRequest, so we need a little help from JSTL here.
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
<c:set var="req" value="${pageContext.request}" />
<c:set var="url">${req.requestURL}</c:set>
<c:set var="uri">${req.requestURI}</c:set>
<!DOCTYPE html>
<html lang="en">
<head>
<base href="${fn:substring(url, 0, fn:length(url) - fn:length(uri))}${req.contextPath}/" />
<link rel="stylesheet" href="css/default.css">
<script src="js/default.js"></script>
</head>
<body>
<img src="img/logo.png" />
<a href="other.jsp">link</a>
</body>
</html>
However, this in turn has a caveat: anchors (the #identifier URLs) will become relative to the base path as well. If you have any of them, you will want to make them relative to the request URL (URI) instead. So, change
<a href="#identifier">jump</a>
to
<a href="${uri}#identifier">jump</a>
In JS, you can just access the <base> element from the DOM whenever you'd like to convert a relative URL to an absolute URL.
var base = document.getElementsByTagName("base")[0].href;
Or if you do jQuery
var base = $("base").attr("href");
In CSS, the image URLs are relative to the URL of the stylesheet itself. So, just drop the images in some folder relative to the stylesheet itself. E.g.
/css/style.css
/css/images/foo.png
and reference them as follows
background-image: url('images/foo.png');
If you'd rather drop the images in some folder at the same level as the CSS folder
/css/style.css
/images/foo.png
then use ../ to go to the common parent folder
background-image: url('../images/foo.png');
See also:
*
*Is it recommended to use the <base> html tag?
A: I agree with tardate.
I also paid attention to filters, and found the solution in the UrlRewriteFilter project.
The simple configuration like following:
<rule>
<from>^.+/resources/(.*)$</from>
<to>/resources/$1</to>
</rule>
forwards all requests whose path contains /resources to the /resources path (regardless of any prefix, including the context path). So I can simply put all my images and CSS files under the resources folder and keep using relative URLs in my styles for background images and other cases.
A:
The Servlet API and JSP have facilities to help manage this. For example, if, in a servlet, you do: response.sendRedirect("/mypage.jsp"), the container will prepend the context and create the url: http://example.com/myapp/mypage.jsp".
Ah, maybe, maybe not - it depends on your container and the servlet spec!
From Servlet 2.3: New features exposed:
And finally, after a lengthy debate by a group of experts, Servlet API 2.3 has clarified once and for all exactly what happens on a res.sendRedirect("/index.html") call for a servlet executing within a non-root context. The issue is that Servlet API 2.2 requires an incomplete path like "/index.html" to be translated by the servlet container into a complete path, but doesn't say how context paths are handled. If the servlet making the call is in a context at the path "/contextpath," should the redirect URI translate relative to the container root (http://server:port/index.html) or the context root (http://server:port/contextpath/index.html)? For maximum portability, it's imperative to define the behavior; after lengthy debate, the experts chose to translate relative to the container root. For those who want context relative, you can prepend the output from getContextPath() to your URI.
So no, with 2.3 your paths are not automatically translated to include the context path.
A: You can use JSTL for creating urls.
For example, <c:url value="/images/header.jpg" /> will prefix the context root.
With CSS, this usually isn't an issue for me.
I have a web root structure like this:
/css
/images
In the CSS file, you then just need to use relative URLs (../images/header.jpg) and it doesn't need to be aware of the context root.
As for JavaScript, what works for me is including some common JavaScript in the page header like this:
<script type="text/javascript">
var CONTEXT_ROOT = '<%= request.getContextPath() %>';
</script>
Then you can use the context root in all your scripts (or, you can define a function to build paths - may be a bit more flexible).
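For instance, a tiny path-building helper along those lines (the function name is arbitrary):

function buildPath(path) {
    // Prefix the context root; accept paths with or without a leading slash.
    return CONTEXT_ROOT + (path.charAt(0) === '/' ? path : '/' + path);
}
// e.g. buildPath('images/logo.gif') -> '/myapp/images/logo.gif'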
Obviously this all depends on your using JSPs and JSTL, but I use JSF with Facelets and the techniques involved are similar - the only real difference is getting the context root in a different way.
A: I've used helper classes to generate img tags etc. This helper class takes care of prefixing paths with the application's contextPath. (This works, but I don't really like it. If anyone has any better alternatives, please tell.)
For paths in CSS files etc. I use an Ant build script that substitutes site.production.css for site.css in the production environment and site.development.css in the development environment.
Alternatively I sometimes use an Ant script that replaces @token@ tokens with the proper data for different environments. In this case the @contextPath@ token would be replaced with the correct context path.
A: One option is to use "flat" application structure and relative URLs whenever possible.
By "flat" I mean that there are no subdirectories under your application root, maybe just few directories for static content as "images/". All your JSP's, action URLs, servlets go directly under the root.
This doesn't completely solve your problem but simplifies it greatly.
A: Vilmantas said the right word here: relative URLs.
All you need to do in your IMG is to use
<img src="myimage.gif"/>
instead of
<img src="/myimage.gif"/>
and it'll be relative to the app context (as the browser is interpreting the URL to go to)
A: Except for special cases, I'd recommend against using absolute URLs this way. Ever. Absolute URLs are good for when another webapp is pointing at something in your webapp. Internally -- when one resource is pointing at a second resource in the same context -- the resource should know where it lives, so it should be able to express a relative path to the second resource.
Of course, you'll write modular components, which don't know the resource that's including them. For example:
/myapp/user/email.jsp:
Email: <a href="../sendmail.jsp">${user.email}</a>
/myapp/browse/profile.jsp:
<jsp:include page="../user/email.jsp" />
/myapp/home.jsp:
<jsp:include page="../user/email.jsp" />
So, how does email.jsp know the relative path of sendmail.jsp? Clearly the link will break on either /myapp/browse/profile.jsp or it will break on /myapp/home.jsp . The answer is, keep all your URLs in the same flat filepath space. That is, every URL should have no slashes after /myapp/ .
This is pretty easy to accomplish, as long as you have some kind of mapping between URLs and the actual files that generate the content. (e.g. in Spring, use DispatcherServlet to map URLs to JSP files or to views.)
There are special cases. e.g. if you're writing a browser-side application in Javascript, then it gets harder to maintain a flat filepath space. In that case, or in other special cases, or just if you have a personal preference, it's not really a big deal to use <%= request.getContextPath() %> to create an absolute path.
A: You can use request.getContextPath() to build absolute URLs that aren't hard-coded to a specific context. As an earlier answer indicated, for JavaScript you just set a variable at the top of your JSP (or preferably in a template) and prefix that as the context.
That doesn't work for CSS image replacement unless you want to dynamically generate a CSS file, which can cause other issues. But since you know where your CSS file is in relation to your images, you can get away with relative URLs.
For some reason, I've had trouble with IE handling relative URLs and had to fall back to using expressions with a JavaScript variable set to the context. I just split my IE image replacements off into their own file and used IE macros to pull in the correct ones. It wasn't a big deal because I already had to do that to deal with transparent PNGs anyway. It's not pretty, but it works.
A: I've used most of these techniques (save the XSLT architecture).
I think the crux (and consensus) of the problem is having a site with potentially multiple directories.
If your directory depth (for lack of a better term) is constant, then you can rely on relative urls in things like CSS.
Mind, the layout doesn't have to be completely flat, just consistent.
For example, we've done hierarchies like /css, /js, /common, /admin, /user. Putting appropriate pages and resources in the proper directories. Having a structure like this works very well with Container based authentication.
I've also mapped *.css and *.js to the JSP servlet, and made them dynamic so I can build them on the fly.
I was just hoping there was something else I may have missed.
A: I by no means claim that the following is an elegant solution. In fact, in hindsight, I wouldn't recommend this approach, given the (most likely) performance hit.
Our web app's JSPs were strictly XML raw data. This raw data was then sent into an XSL (server-side) which applied the right CSS tags, and spit out the XHTML.
We had a single template.xsl which would be inherited by the multiple XSL files that we had for different components of the website. Our paths were all defined in an XML file called paths.xml:
<?xml version="1.0" encoding="UTF-8"?>
<paths>
<path name="account" parent="home">Account/</path>
<path name="css">css/</path>
<path name="home">servlet/</path>
<path name="icons" parent="images">icons/</path>
<path name="images">images/</path>
<path name="js">js/</path>
</paths>
An internal link would be in the XML as follows:
<ilink name="link to icons" type="icons">link to icons</ilink>
This would get processed by our XSL:
<xsl:template match="ilink">
<xsl:variable name="temp">
<xsl:value-of select="$rootpath" />
<xsl:call-template name="paths">
<xsl:with-param name="path-name"><xsl:value-of select="@type" /></xsl:with-param>
</xsl:call-template>
<xsl:value-of select="@file" />
</xsl:variable>
<a href="{$temp}" title="{@name}" ><xsl:value-of select="." /></a>
</xsl:template>
$rootpath was passed to each file as ${applicationScope.contextPath}. The idea behind using XML instead of just hard-coding it in a JSP/Java file was that we didn't want to have to recompile.
Again, the solution isn't a good one at all...but we did use it once!
Edit: Actually, the complexity in our case arose because we weren't able to use JSPs for our entire view. Why wouldn't someone just use ${applicationScope.contextPath} to retrieve the context path? It worked fine for us then.
A: When creating a site from scratch, I side with @Will - aim for a consistent and predictable url structure so that you can stick with relative references.
But things can get really messy if you are updating a site that was originally built to work directly under the site root "/" (pretty common for simple JSP sites) to formal Java EE packaging (where context root will be some path under the root).
That can mean a lot of code changes.
If you want to avoid or postpone the code changes, but still ensure correct context root referencing, a technique I've tested is to use servlet filters. The filter can be dropped into an existing project without changing anything (except web.xml), and will remap any url references in the outbound HTML to the correct path, and also ensure redirects are correctly referenced.
An example site and usable code available here: EnforceContextRootFilter-1.0-src.zip
NB: the actual mapping rules are implemented as regex in the servlet class and provide a pretty general catch-all - but you may need to modify for particular circumstances.
btw, I forked a slightly different question to address migrating existing code base from "/" to a non-root context-path
A: I tend to write a property as part of my core JavaScript library within my app. I don't think it's perfect, but I think it's the best I've managed to achieve.
First off, I have a module that's part of my app core that is always available
(function (APP) {
var ctx;
APP.setContext = function (val) {
// protect rogue JS from setting the context.
if (ctx) {
return;
}
val = (val || '').trim(); // default to an empty string before trimming
// Don't allow a double slash for a context.
if (val.charAt(0) === '/' && val.charAt(1) === '/') {
return;
}
// Context must both start and end in /.
if (val.length === 0 || val === '/') {
val = '/';
} else {
if (val.charAt(0) !== '/') {
val = '/' + val;
}
if (val.slice(-1) !== '/') {
val += '/';
}
}
ctx = val;
};
APP.getContext = function () {
return ctx || '/';
};
APP.getUrl = function (val) {
if (val && val.length > 0) {
return APP.getContext() + (val.charAt(0) === '/' ? val.substring(1) : val);
}
return APP.getContext();
};
})(window.APP = window.APP || {});
I then use apache tiles with a common header that always contains the following:
<script type="text/javascript">
APP.setContext('${pageContext.request['contextPath']}');
// If preferred use JSTL cor, but it must be available and declared.
//APP.setContext('<c:url value='/'/>');
</script>
Now that I've initialized the context I may use getUrl(path) from anywhere (js files or within jsp/html) which will return an absolute path for the given input string within the context.
Note that the following are both equivalent intentionally. getUrl will always return an absolute path as a relative path does not need you to know the context in the first place.
var path = APP.getUrl("/some/path");
var path2 = APP.getUrl("some/path");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: Dynamic type languages versus static type languages What are the advantages and limitations of dynamic type languages compared to static type languages?
See also: whats with the love of dynamic languages (a far more argumentative thread...)
A: Well, both are very, very misunderstood, and they are also two completely different things that aren't mutually exclusive.
Static types are a restriction of the grammar of the language. Statically typed languages strictly could be said to not be context free. The simple truth is that it becomes inconvenient to express a language sanely in context free grammars that doesn't treat all its data simply as bit vectors. Static type systems are part of the grammar of the language if any, they simply restrict it more than a context free grammar could, grammatical checks thus happen in two passes over the source really. Static types correspond to the mathematical notion of type theory, type theory in mathematics simply restricts the legality of some expressions. Like, I can't say 3 + [4,7] in maths, this is because of the type theory of it.
Static types are thus not a way to 'prevent errors' from a theoretical perspective, they are a limitation of the grammar. Indeed, provided that +, 3 and intervals have the usual set theoretical definitions, if we remove the type system 3 + [4,7]has a pretty well defined result that's a set. 'runtime type errors' theoretically do not exist, the type system's practical use is to prevent operations that to human beings would make no sense. Operations are still just the shifting and manipulation of bits of course.
The catch to this is that a type system can't decide if such operations are going to occur or not if it would be allowed to run. As in, exactly partition the set of all possible programs in those that are going to have a 'type error', and those that aren't. It can do only two things:
1: prove that type errors are going to occur in a program
2: prove that they aren't going to occur in a program
This might seem like I'm contradicting myself. But what a C or Java type checker does is it rejects a program as 'ungrammatical', or as it calls it 'type error' if it can't succeed at 2. It can't prove they aren't going to occur, that doesn't mean that they aren't going to occur, it just means it can't prove it. It might very well be that a program which will not have a type error is rejected simply because it can't be proven by the compiler. A simple example being if(1) a = 3; else a = "string";, surely since it's always true, the else-branch will never be executed in the program, and no type error shall occur. But it can't prove these cases in a general way, so it's rejected. This is the major weakness of a lot of statically typed languages, in protecting you against yourself, you're necessarily also protected in cases you don't need it.
But, contrary to popular belief, there are also statically typed languages that work by principle 1. They simply reject all programs of which they can prove it's going to cause a type error, and pass all programs of which they can't. So it's possible they allow programs which have type errors in them, a good example being Typed Racket, it's a hybrid between dynamic and static typing. And some would argue that you get the best of both worlds in this system.
Another advantage of static typing is that types are known at compile time, and thus the compiler can use this. If we in Java do "string" + "string" or 3 + 3, both + tokens in text in the end represent a completely different operation and datum, the compiler knows which to choose from the types alone.
Now, I'm going to make a very controversial statement here but bear with me: 'dynamic typing' does not exist.
Sounds very controversial, but it's true, dynamically typed languages are from a theoretical perspective untyped. They are just statically typed languages with only one type. Or simply put, they are languages that are indeed grammatically generated by a context free grammar in practice.
Why don't they have types? Because every operation is defined and allowed on every operand, so what's a 'runtime type error' exactly? From a theoretical perspective it's purely a side-effect. If doing print("string"), which prints a string, is an operation, then so is length(3); the former has the side effect of writing string to the standard output, the latter simply error: function 'length' expects array as argument. — that's it. There is, from a theoretical perspective, no such thing as a dynamically typed language. They are untyped.
All right, the obvious advantage of 'dynamically typed' language is expressive power, a type system is nothing but a limitation of expressive power. And in general, languages with a type system indeed would have a defined result for all those operations that are not allowed if the type system was just ignored, the results would just not make sense to humans. Many languages lose their Turing completeness after applying a type system.
The obvious disadvantage is the fact that operations can occur which would produce results which are nonsensical to humans. To guard against this, dynamically typed languages typically redefine those operations, rather than producing that nonsensical result they redefine it to having the side effect of writing out an error, and possibly halting the program altogether. This is not an 'error' at all, in fact, the language specification usually implies this, this is as much behaviour of the language as printing a string from a theoretical perspective. Type systems thus force the programmer to reason about the flow of the code to make sure that this doesn't happen. Or indeed, reason so that it does happen can also be handy in some points for debugging, showing that it's not an 'error' at all but a well defined property of the language. In effect, the single remnant of 'dynamic typing' that most languages have is guarding against a division by zero. This is what dynamic typing is, there are no types, there are no more types than that zero is a different type than all the other numbers. What people call a 'type' is just another property of a datum, like the length of an array, or the first character of a string. And many dynamically typed languages also allow you to write out things like "error: the first character of this string should be a 'z'".
Another thing is that dynamically typed languages have the type available at runtime and usually can check it and deal with it and decide from it. Of course, in theory it's no different than accessing the first char of an array and seeing what it is. In fact, you can make your own dynamic C, just use only one type like long long int and use the first 8 bits of it to store your 'type' in and write functions accordingly that check for it and perform float or integer addition. You have a statically typed language with one type, or a dynamic language.
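A deliberately crude C sketch of that idea — one universal value type with a tag stored in the top bits (all names and tag choices here are illustrative):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t value;                 /* the single universal type */
enum { TAG_INT = 1, TAG_CHAR = 2 };     /* 'types' are just data in the top 8 bits */

value make_int(int64_t i) { return ((value)TAG_INT << 56) | ((uint64_t)i & 0x00FFFFFFFFFFFFFFULL); }
int tag_of(value v) { return (int)(v >> 56); }
int64_t int_of(value v) { return (int64_t)(v & 0x00FFFFFFFFFFFFFFULL); } /* ignores sign extension for brevity */

value add(value a, value b) {
    if (tag_of(a) == TAG_INT && tag_of(b) == TAG_INT)
        return make_int(int_of(a) + int_of(b));
    fprintf(stderr, "error: 'add' expects two ints\n"); /* the 'runtime type error' as a side effect */
    return make_int(0);
}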
In practice this all shows: statically typed languages are generally used in the context of writing commercial software, whereas dynamically typed languages tend to be used in the context of solving some problems and automating some tasks. Writing code in statically typed languages simply takes longer and is more cumbersome, because you can't do things which you know are going to turn out okay; the type system still protects you against yourself for errors you don't make. Many coders don't even realize that they do this because it's in their system, but when you code in static languages, you often work around the fact that the type system won't let you do things that can't go wrong, because it can't prove they won't go wrong.
As I noted, 'statically typed' in general means case 2, guilty until proven innocent. But some languages, which do not derive their type system from type theory at all, use rule 1: innocent until proven guilty, which might be the ideal hybrid. So, maybe Typed Racket is for you.
Also, well, for a more absurd and extreme example, I'm currently implementing a language where 'types' are truly the first character of an array, they are data, data of the 'type', 'type', which is itself a type and datum, the only datum which has itself as a type. Types are not finite or bounded statically but new types may be generated based on runtime information.
A: It depends on context. There are a lot of benefits appropriate to dynamically typed systems, as well as to strongly typed ones. In my opinion, the flow of a dynamically typed language is faster. Dynamic languages are not constrained by class attributes or by the compiler's view of what is going on in the code; you have some freedom. Furthermore, dynamic languages are usually more expressive and result in less code, which is good. Despite this, they are more error prone, which is debatable and depends heavily on unit-test coverage. It's easy to prototype with a dynamic language, but maintenance may become a nightmare.
The main gain of a statically typed system is IDE support and, certainly, static analysis of the code.
You become more confident of the code after every change. Maintenance is a piece of cake with such tools.
A: Perhaps the single biggest "benefit" of dynamic typing is the shallower learning curve. There is no type system to learn and no non-trivial syntax for corner cases such as type constraints. That makes dynamic typing accessible to a lot more people and feasible for many people for whom sophisticated static type systems are out of reach. Consequently, dynamic typing has caught on in the contexts of education (e.g. Scheme/Python at MIT) and domain-specific languages for non-programmers (e.g. Mathematica). Dynamic languages have also caught on in niches where they have little or no competition (e.g. Javascript).
The most concise dynamically-typed languages (e.g. Perl, APL, J, K, Mathematica) are domain specific and can be significantly more concise than the most concise general-purpose statically-typed languages (e.g. OCaml) in the niches they were designed for.
The main disadvantages of dynamic typing are:
*
*Run-time type errors.
*Can be very difficult or even practically impossible to achieve the same level of correctness and requires vastly more testing.
*No compiler-verified documentation.
*Poor performance (usually at run-time but sometimes at compile time instead, e.g. Stalin Scheme) and unpredictable performance due to dependence upon sophisticated optimizations.
Personally, I grew up on dynamic languages but wouldn't touch them with a 40' pole as a professional unless there were no other viable options.
A: The ability of the interpreter to deduce types and type conversions makes development time faster, but it can also provoke runtime failures which you just cannot get in a statically typed language, where you catch them at compile time. Which one's better (or even whether that's always true) is hotly discussed in the community these days (and has been for a long time).
A good take on the issue is from Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages by Erik Meijer and Peter Drayton at Microsoft:
Advocates of static typing argue that the advantages of static typing include earlier detection of programming mistakes (e.g. preventing adding an integer to a boolean), better documentation in the form of type signatures (e.g. incorporating number and types of arguments when resolving names), more opportunities for compiler optimizations (e.g. replacing virtual calls by direct calls when the exact type of the receiver is known statically), increased runtime efficiency (e.g. not all values need to carry a dynamic type), and a better design time developer experience (e.g. knowing the type of the receiver, the IDE can present a drop-down menu of all applicable members). Static typing fanatics try to make us believe that "well-typed programs cannot go wrong". While this certainly sounds impressive, it is a rather vacuous statement. Static type checking is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type-checker, and that there are programs that while they cannot go wrong cannot be type-checked. The impulse for making static typing less partial and more complete causes type systems to become overly complicated and exotic as witnessed by concepts such as "phantom types" [11] and "wobbly types" [10]. This is like trying to run a marathon with a ball and chain tied to your leg and triumphantly shouting that you nearly made it even though you bailed out after the first mile.
Advocates of dynamically typed languages argue that static typing is too rigid, and that the softness of dynamically typed languages makes them ideally suited for prototyping systems with changing or unknown requirements, or that interact with other systems that change unpredictably (data and application integration). Of course, dynamically typed languages are indispensable for dealing with truly dynamic program behavior such as method interception, dynamic loading, mobile code, runtime reflection, etc. In the mother of all papers on scripting [16], John Ousterhout argues that statically typed systems programming languages make code less reusable, more verbose, not more safe, and less expressive than dynamically typed scripting languages. This argument is parroted literally by many proponents of dynamically typed scripting languages. We argue that this is a fallacy and falls into the same category as arguing that the essence of declarative programming is eliminating assignment. Or as John Hughes says [8], it is a logical impossibility to make a language more powerful by omitting features. Defending the fact that delaying all type-checking to runtime is a good thing, is playing ostrich tactics with the fact that errors should be caught as early in the development process as possible.
A: From Artima's Typing: Strong vs. Weak, Static vs. Dynamic article:
strong typing prevents mixing operations between mismatched types. In order to mix types, you must use an explicit conversion
weak typing means that you can mix types without an explicit conversion
In Pascal Costanza's paper, Dynamic vs. Static Typing — A Pattern-Based Analysis (PDF), he claims that in some cases static typing is more error-prone than dynamic typing. Some statically typed languages force you to manually emulate dynamic typing in order to do "The Right Thing". It's discussed at Lambda the Ultimate.
A: Static type systems seek to eliminate certain errors statically, inspecting the program without running it and attempting to prove soundness in certain respects. Some type systems are able to catch more errors than others. For example, C# can eliminate null pointer exceptions when used properly, whereas Java has no such power. Twelf has a type system which actually guarantees that proofs will terminate, "solving" the halting problem.
However, no type system is perfect. In order to eliminate a particular class of errors, they must also reject certain perfectly valid programs which violate the rules. This is why Twelf doesn't really solve the halting problem; it just avoids it by throwing out a large number of perfectly valid proofs which happen to terminate in odd ways. Likewise, Java's type system rejects Clojure's PersistentVector implementation due to its use of heterogeneous arrays. It works at runtime, but the type system cannot verify it.
For that reason, most type systems provide "escapes", ways to override the static checker. For most languages, these take the form of casting, though some (like C# and Haskell) have entire modes which are marked as "unsafe".
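For instance, a downcast in C# compiles fine and simply defers the real check to runtime (a trivial illustration):
object boxed = 42;
// The static checker accepts this cast -- the escape hatch --
// but it throws an InvalidCastException when the program runs:
string s = (string)boxed;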
Subjectively, I like static typing. Implemented properly (hint: not Java), a static type system can be a huge help in weeding out errors before they crash the production system. Dynamically typed languages tend to require more unit testing, which is tedious at the best of times. Also, statically typed languages can have certain features which are either impossible or unsafe in dynamic type systems (implicit conversions spring to mind). It's all a question of requirements and subjective taste. I would no more build the next Eclipse in Ruby than I would attempt to write a backup script in Assembly or patch a kernel using Java.
Oh, and people who say that "x typing is 10 times more productive than y typing" are simply blowing smoke. Dynamic typing may "feel" faster in many cases, but it loses ground once you actually try to make your fancy application run. Likewise, static typing may seem like it's the perfect safety net, but one look at some of the more complicated generic type definitions in Java sends most developers scurrying for eye blinders. Even with type systems and productivity, there is no silver bullet.
Final note: don't worry about performance when comparing static with dynamic typing. Modern JITs like V8 and TraceMonkey are coming dangerously close to static-language performance. Also, the fact that Java actually compiles down to an inherently dynamic intermediate language should be a hint that for most cases, dynamic typing isn't the huge performance killer that some people make it out to be.
A: There are lots of different things about static and dynamic languages. For me, the main difference is that in dynamic languages the variables don't have fixed types; instead, the types are tied to values. Because of this, the exact code that gets executed is undetermined until runtime.
In early or naïve implementations this is a huge performance drag, but modern JITs get tantalizingly close to the best you can get with optimizing static compilers (in some fringe cases, even better than that).
A: It is all about the right tool for the job. Neither is better 100% of the time. Both systems were created by man and have flaws. Sorry, but we suck at making perfect stuff.
I like dynamic typing because it gets out of my way, but yes, runtime errors can creep up that I didn't plan for.
Whereas static typing may prevent the aforementioned errors, it can drive a programmer who is new to typed languages crazy trying to cast between a constant char and a string.
A: Static Typing:
Languages such as Java and Scala are statically typed.
Variables have to be declared and initialized before they are used in code.
For example:
int x;
x = 10;
System.out.println(x);
Dynamic Typing:
Perl is a dynamically typed language.
Variables need not be declared with a type before they are used in code.
$y = 10; # use this variable in a later part of the code
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "207"
}
|
Q: Adding a web reference to a DLL which is GACed I came across this problem while writing an event handler in SharePoint. My event handler has a web reference. When I create this web reference, the URL of the web service is added to the .config file of the assembly. If I have to change the web reference URL, I just have to change the link in the config file.
The problem comes when I try to GAC the DLL. When I GAC the DLL, the config file cannot be GACed along with it, and hence there is no way for me to update the web reference.
One workaround I have found is to modify the constructor in the Reference.cs class, which is autogenerated by Visual Studio when I add a reference, so that the constructor reads the web service URL from some other location, say the registry or an XML file in some predetermined location. But this poses a problem sometimes: when I update the web reference using Visual Studio, the Reference.cs file gets regenerated, and all my modifications are lost.
Is there a better way to solve this problem?
A: Any application hosted by SharePoint is using the web.config located at the root of your SharePoint web site in IIS. What you need to do is add the configuration generated by the Web/Service Reference wizard to your web.config.
This is roughly how it works:
*
*SharePoint application pool loads your DLL
*Your DLL looks for the service information in the current application configuration file
*Your DLL finds web.config and looks for configuration information there
Basically, the app.config that is being generated in your DLL is not used, as the application in this case is the application pool (w3wp.exe) that is hosting the SharePoint application. For SharePoint, the app.config is actually named web.config and exists at the root of the SharePoint website.
A: If you have Visual Studio 2008, use a Service Reference instead of a Web Reference, which will generate partial classes that you can use to override functionality without your code overwritten by the generator.
For Visual Studio 2005, you could just add the partial keyword to the class in Reference.cs and keep a separate file with your own partial class:
public partial class WebServiceReference
{
    public WebServiceReference(ExampleConfigurationClass config)
    {
        /* ... */
    }
}

WebServiceReference svc = new WebServiceReference(myConfig);
A: I resolved this by making the web reference dynamic for my class library and then copying the applicationSettings config section containing the web reference from the app.config file into my Sharepoint site web.config.
Note you will also need to copy the entry for applicationSettings into your web.config as this is not in there normally.
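Roughly, the copied sections look like this in web.config (the settings class and setting names here are placeholders for whatever your project generated):
<configSections>
  <sectionGroup name="applicationSettings"
                type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
    <section name="MyLibrary.Properties.Settings"
             type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </sectionGroup>
</configSections>
<applicationSettings>
  <MyLibrary.Properties.Settings>
    <setting name="MyLibrary_MyService_WebServiceUrl" serializeAs="String">
      <value>http://server/path/Service.asmx</value>
    </setting>
  </MyLibrary.Properties.Settings>
</applicationSettings>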
A: You could try this: Rather than using the dynamic web reference make it a static reference so that the code in Reference.cs won't go looking for a value in the .config file for the url. Then sub-class the generated web service client code and in that derived class, add your own logic to set the .Url property. Then VS.NET can re-gen Reference.cs all it likes, and your url setting code will remain. Of course, you have to update any downstream code to use your derived class, but that should be a simple global replace.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Is it possible to define a Java ClassLoader that returns completely different classes to the ones requested? I've tried this, but get a ClassNotFoundException when calling:
Class.forName("com.AClass", false, mySpecialLoader)
A: The ClassLoader will have to call defineClass to get the Class. According to the JavaDoc for defineClass:
If name is not null, it must be equal
to the binary name of the class
specified by the byte array.
If the name is null, it will get it from the bytecode. So you can return any class you want as long as it's called com.AClass. In other words, you could have multiple versions of com.AClass. You could even use something like JavaAssist to create a class on the fly.
But that doesn't explain the ClassNotFoundException - it sounds like your class loader isn't returning anything.
A: It is impossible to return a class named differently than the one requested. However it is possible to use bytecode manipulation tools like ASM to automatically rename the class you want to return to the one requested.
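For illustration, here's a rough sketch of that renaming approach using ASM's ClassRemapper (ASM 5+ API; com.BClass is a hypothetical class we serve in place of com.AClass):
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.commons.ClassRemapper;
import org.objectweb.asm.commons.SimpleRemapper;

public class RenamingClassLoader extends ClassLoader {
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        if (!name.equals("com.AClass")) {
            throw new ClassNotFoundException(name);
        }
        try {
            // Read the bytecode of the class we actually want to serve
            // and rewrite every reference so it is named com.AClass.
            ClassReader reader = new ClassReader("com.BClass");
            ClassWriter writer = new ClassWriter(0);
            reader.accept(new ClassRemapper(writer,
                    new SimpleRemapper("com/BClass", "com/AClass")), 0);
            byte[] bytes = writer.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (java.io.IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}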
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Best way to access the runtime configuration of a maven plugin from a custom mojo? I am writing a custom maven2 MOJO. I need to access the runtime configuration of another plugin, from this MOJO.
What is the best way to do this?
A: You can get a list of the plugins currently being used in the build with the following steps:
First, you need to get Maven to inject the current project into your mojo; you use the class variable defined below to get this.
/**
* The maven project.
*
* @parameter expression="${project}"
* @readonly
*/
private MavenProject project;
Then you can use the following to get a list of plugins used in this build.
project.getBuildPlugins()
You can iterate through this list until you find the plugin from which you want to extract configuration.
Finally, you can get the configuration as a Xpp3Dom.
plugin.getConfiguration()
Note: If you're altering the other plugin's configuration (rather than just extracting information), it will only remain altered for the current phase and not subsequent phases.
A: Using properties is certainly one way to go, however not ideal. It still requires a user to define the ${propertyName} in multiple places throughout the pom. I want to allow my plugin to work with no modifications to the user's pom, other than the plugin definition itself.
I don't see accessing the runtime properties of another MOJO as too tight coupling. If the other MOJO is defined anywhere in the build hierarchy, I want my MOJO to respect the same configuration.
My current solution is:
private Plugin lookupPlugin(String key) {
List plugins = getProject().getBuildPlugins();
for (Iterator iterator = plugins.iterator(); iterator.hasNext();) {
Plugin plugin = (Plugin) iterator.next();
if(key.equalsIgnoreCase(plugin.getKey())) {
return plugin;
}
}
return null;
}
/**
* Extracts nested values from the given config object into a List.
*
* @param childname the name of the first subelement that contains the list
* @param config the actual config object
*/
private List extractNestedStrings(String childname, Xpp3Dom config) {
final Xpp3Dom subelement = config.getChild(childname);
if (subelement != null) {
List result = new LinkedList();
final Xpp3Dom[] children = subelement.getChildren();
for (int i = 0; i < children.length; i++) {
final Xpp3Dom child = children[i];
result.add(child.getValue());
}
getLog().info("Extracted strings: " + result);
return result;
}
return null;
}
This has worked for the few small builds I've tested with. Including a multi-module build.
A: I'm not sure how you would do that exactly, but it seems to me that this might not be the best design decision. If at all possible you should aim to decouple your Mojo from any other plugins out there.
Instead I would recommend using custom properties to factor out any duplication in the configuration of separate plugins.
You can set a custom property "foo" in your pom by using the properties section:
<project>
...
<properties>
<foo>value</foo>
</properties>
...
</project>
The property foo is now accessible anywhere in the pom by using the dollar sign + curly brace notation:
<somePluginProperty>${foo}</somePluginProperty>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Interrupts and exceptions I've seen several question on here about exceptions, and some of them hint at interrupts as exceptions, but none make the connection clear.
*
*What is an interrupt?
*What is an exception? (please explain what exceptions are for each language you know, as there are some differences)
*When is an exception an interrupt and vice-versa?
A: Interrupts indicate that something external to the processor core requires its attention. An interrupt suspends the normal flow of the program, executes an Interrupt Service Routine (ISR) and generally returns to where it was before the interrupt occurred.
There are lots of variations on this basic theme: interrupts might be generated by software, another task might get the CPU after the ISR, etc. The key point is that interrupts can occur at any time, for a reason the code/CPU has no control over.
An exception is a bit trickier to define because it has potentially three levels of meaning:
Hardware Exceptions
Certain processors (PowerPC for example) define exceptions to indicate that some sort of unusual condition has occurred: System Reset, Invalid Address, some virtual address translation cache miss, etc...
These exceptions are also used to implement breakpoints and system calls. In this case, they act almost like interrupts.
OS Exceptions
Some of the hardware exceptions will be handled by the OS. For example, your program accesses invalid memory. This will cause a hardware exception. The OS has a handler for that exception, and odds are that the OS will send a signal to your application (SIGSEGV, for example) denoting there is a problem.
If your program has a signal handler installed, the signal handler will run and hopefully deal with the situation. If you don't have a signal handler, the program can be terminated or suspended.
I would consider Windows' Structured Exception Handling (SEH) to be this type of exception.
Software Exceptions
Some languages like Java, C++ and C# have the concept of software exceptions, where the language provides for the handling of unforeseen or unusual conditions related to the operation of the program. In this case, an exception is raised at some point in the code and some code higher up on the program execution stack would "catch" the exception and execute. This is what try/catch blocks do.
A: An interrupt is a CPU signal generated by hardware, or by specific CPU instructions. These cause interrupt handlers to be executed. Things such as I/O signals from I/O hardware generate interrupts.
An exception can be thought of as a software version of an interrupt that only affects its own process.
I'm not sure of the exact details, but an exception could be implemented by an interrupt.
A: I'm going to elaborate on what an interrupt is because there's one critical type of interrupt nobody has dealt with yet: the timer.
But first, let me back up. When you get an interrupt, your interrupt handler (which lives in kernelspace) runs, which typically disables interrupts, sees to any pending business (handling the packet that just arrived on the network, processing the keystroke, etc.) and then (remember we're still in the kernel at this point) figures out what process is supposed to run next (could be the same one, could be a different one, depends on the scheduler) and then runs it.
Only one process runs on the processor at any given time. And when you are using a multitasking OS, the way it switches between them is called a context switch - basically the registers of the processor get dumped to memory, flow passes to the new process, and when the process is done you context switch to something else.
So, let's say I write a simple C program that counts all numbers, or the Fibonacci sequence, or something else without stopping. Or even better: does nothing but spins inside of a while(1) loop. How do the other processes on the system get a chance to run? What if there is nothing happening to cause an interrupt?
The answer is that you have a timer device that is constantly interrupting. And it is what keeps a spinning process from taking down the entire system. Although I will note that the interrupt handlers disable interrupts, so if you do something that blocks indefinitely you can take down the entire system.
A: Exception
An exception is when the processor executes code that is not on its normal path. This is an 'exception' to normal operation, which is essentially linear movement through code and control structures. Different languages have support for various types of exceptions, typically used to handle errors during program operation.
Interrupt
An interrupt is an exception at the hardware level (generally). The interrupt is a physical signal in the processor that tells the CPU to store its current state and jump to interrupt (or exception) handler code. Once the handler is done the original state is restored and processing can continue.
An interrupt is always an exception, even when it's intended. Interrupts might indicate:
*
*errors, such as a memory access violation
*that the OS needs to perform an operation to support a running program, such as a software interrupt, or a memory paging request
*a hardware device requires attention, such as a received network packet, or an empty transmit buffer
These always force the processor to pause its current activity to deal with the raised exception, only resuming once the interrupt handler is complete.
Pitfalls
In terms of interrupts, common pitfalls are race conditions. For instance you might have an interrupt that periodically increments a global realtime clock. The clock might be 64 bits on a 32 bit machine.
If a program is reading the clock, and gets the first 32 bit word, then the interrupt occurs, once the interrupt handler exits the process gets the second 32 bit word, and the data will be incoherent - the two words may be out of sync. If you attempt to use a mutex or semaphore to lock the variable in the process, then the interrupt will hang waiting for the lock and halt the system (deadlock), unless both the handler and the processes that use the data are written very carefully. It's easy to get in trouble when writing for interrupts.
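A minimal C sketch of that torn read and one common lock-free fix (a retry loop; this assumes the ISR increments the low word first and carries into the high word):
#include <stdint.h>

/* Updated only by the timer ISR; read by normal code. */
volatile uint32_t clock_hi, clock_lo;

/* Torn read: the ISR may fire between the two loads, leaving
   hi and lo from different ticks. */
uint64_t read_clock_broken(void) {
    uint32_t hi = clock_hi;
    uint32_t lo = clock_lo;   /* ISR may have run in between */
    return ((uint64_t)hi << 32) | lo;
}

/* Retry loop: re-read until the high word is stable across the
   read of the low word, so the pair is coherent -- no lock, so
   there is nothing for the ISR to deadlock on. */
uint64_t read_clock(void) {
    uint32_t hi, lo;
    do {
        hi = clock_hi;
        lo = clock_lo;
    } while (hi != clock_hi);
    return ((uint64_t)hi << 32) | lo;
}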
Re-entrant functions are also another problem. If you are executing funcA in program code, take an interrupt which also executes funcA you may end up with unintended consequences due to shared variables (static, or heap variables, classes, etc). You typically want to execute as little code as possible in the interrupt handler, and frequently have it set a flag so the process can do the real work later, without worrying about conflicts.
In some ways this is similar to developing for a multiprocessor, and is one of the reasons why kernel programming is still considered black magic by many.
-Adam
A: Interrupts are expected to occur regularly (although sometimes they are not regular). They interrupt the CPU because something important just happened and it needs to be taken care of immediately.
Exceptions are supposed to be exceptions to the rule; these are thrown by software because something unexpected happened and this is your chance to try to do something about it, or at the very least crash gracefully.
A: Your processor is going to have a number of external interrupt pins. Typically these pins are connected to hardware and are used to indicate when some external event occurs. For example, if you are using a serial port, the UART will raise a pin that is connected to one of the interrupt pins on the processor to indicate that a byte has been received.
Other peripherals like timers, usb controllers, etc. will also generate interrupts on the basis of some external event.
When the processor receives a signal on one of its external interrupt pins it will immediately jump to some nominated location in memory and start executing. The code executed is typically called an ISR, or interrupt service routine. Unless you're implementing drivers or doing embedded software of some sort, it's unlikely that you'll ever come across ISRs.
Unfortunately the answer to the question about exceptions is a little less clear - there have been 3 different meanings listed in other answers on this page.
Ron Savage's answer refers to the software construct. This is purely an application level exception, where a piece of code is able to indicate an error that can be detected by some other piece of code. There is no hardware involvement here at all.
Then there is the exception as seen by a task. This is an operating system level construct that is used to kill a task when it does something illegal - like divide by 0, illegally accessing memory etc.
And thirdly, there is the hardware exception. In terms of behaviour it is identical to an interrupt, in that the processor will immediately jump to some nominated memory location and start executing. Where an exception differs from an interrupt is that an exception is caused by some illegal activity that the processor has detected. For example, the MMU on the processor will detect illegal memory accesses and cause an exception. These hardware exceptions are the initial trigger for the operating system to perform its cleanup tasks (as described in the paragraph above).
A: When you are talking about interrupts and exceptions you are generally talking close to hardware level code and interrupts and exceptions are often implemented in part by hardware and part in software.
An interrupt is an event in hardware (or manually fired in assembly) that is associated with a vector of handlers that can be used to handle the interrupt's event, be it I/O completion, an I/O error (a disk or memory failure), or an I/O event (a mouse move, for example). Interrupts can give rise to exceptions, often when some unexpected interrupt occurs.
An exception is unexpected behavior; when working with hardware, these most often come from an interrupt and are handled separately in software using an interrupt handler. Programming languages as we see them almost always disguise this as a control structure of some kind.
A: In general, an interrupt is a hardware implemented trap of some sort. You register a handler for a specific interrupt (division by 0, data available on a peripheral, timer expired) and when that event happens, all processing system-wide halts, you quickly process the interrupt, and things continue on. These are usually implented in a device driver or the kernel.
An exception is a software-implemented way of handling errors in code. You set up a handler for specific (or general) exceptions. When an exception occurs, the language's runtime will start unwinding the stack until it reaches a handler for that specific exception. At that point you can handle the exception and continue on, or quit your program.
A: Interrupts are basically hardware driven, like your printer indicating it is "out of paper" or the network card indicating it has lost connection.
An exception is simply an error condition in your program, detected by a try / catch block. Like:
Try
{
... various code steps that "throw exceptions" on error ...
}
catch (exception e)
{
print 'Crap! Something bad happened.' + e.toString()
}
It's a handy way to catch "any error" that happens in a block of code so you can handle them in a similar fashion.
A: Keeping things simple...
When you are done handling an interrupt, you (normally) return to what you were doing before you were interrupted.
Handling an exception involves throwing away successive layers of what you are currently working on until you bubble up to a point where the exception can be handled (caught).
While handling an interrupt, you may decide to throw an exception, but that doesn't mean you have to consider the interrupt itself as an exception. Exceptions do not "interrupt" (since that would imply the possibility of returning to what you were doing just before you were interrupted); rather they "abort" (some subset of) your current activity.
And, as noted several times already, interrupts are usually triggered by outside entities such as hardware or users (such as a mouse click or keystroke like CTRL-C) while exceptions are generated (thrown) synchronously by software detecting a "problem" or "exceptional condition".
A: Interrupts are generated by devices external to the CPU (timer tick, disk operation completion, network packet arrival, etc.) and are asynchronous with program execution. Exceptions are synchronous with program execution (e.g. division by zero, accessing an invalid address).
Unless your program is executing without an operating system (or you are developing an OS), it will never see a raw exception/interrupt. They are caught by the OS and handled by it (interrupts), or converted to some other form before being reflected back to the user program (e.g. signals on UNIX, structured exception handling (SEH) on Windows) where it has a chance of handling it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: How can I dynamically switch web service addresses in .NET without a recompile? I have code that references a web service, and I'd like the address of that web service to be dynamic (read from a database, config file, etc.) so that it is easily changed. One major use of this will be to deploy to multiple environments where machine names and IP addresses are different. The web service signature will be the same across all deployments, just located elsewhere.
Maybe I've just been spoiled by the Visual Studio "Add Web Reference" wizard - seems like this should be something relatively easy, though.
A: I've struggled with this issue for a few days and finally the light bulb clicked. The KEY to being able to change the URL of a web service at runtime is overriding the constructor, which I did with a partial class declaration. Setting the URL behavior to Dynamic (as described in other answers) must also be done.
This basically creates a web-service wrapper: if you have to reload the web service at some point via Add Service Reference, you don't lose your work. The Microsoft help for partial classes specifically states that part of the reason for this construct is to create web service wrappers. http://msdn.microsoft.com/en-us/library/wa80x488(v=vs.100).aspx
// Web Service Wrapper to override constructor to use custom ConfigSection
// app.config values for URL/User/Pass
namespace myprogram.webservice
{
public partial class MyWebService
{
public MyWebService(string szURL)
{
this.Url = szURL;
if ((this.IsLocalFileSystemWebService(this.Url) == true))
{
this.UseDefaultCredentials = true;
this.useDefaultCredentialsSetExplicitly = false;
}
else
{
this.useDefaultCredentialsSetExplicitly = true;
}
}
}
}
A: When you generate a web reference and select it in Solution Explorer, you will see a URL Behavior property in the Properties pane.
Changing the value to Dynamic will put an entry in your app.config.
Here is the CodePlex article that has more information.
A: Change URL behavior to "Dynamic".
A: As long as the web service methods and underlying exposed classes do not change, it's fairly trivial. With Visual Studio 2005 (and newer), adding a web reference creates an app.config (or web.config, for web apps) section that has this URL. All you have to do is edit the app.config file to reflect the desired URL.
In our project, our simple approach was to just have the app.config entries commented per environment type (development, testing, production). So we just uncomment the entry for the desired environment type. No special coding needed there.
A: If you are truly dynamically setting this, you should set the .Url field of instance of the proxy class you are calling.
Setting the value in the .config file from within your program:
*
*Is a mess;
*Might not be read until the next application start.
If it is only something that needs to be done once per installation, I'd agree with the other posters and use the .config file and the dynamic setting.
A: Just a note about the difference between static and dynamic:
*
*Static: you must set the URL property every time you call the web service. This is because the web service's base URL is baked into the proxy class constructor.
*Dynamic: a special configuration key will be created for you in your web.config file. By default the proxy class will read the URL from this key.
A: I know this is an old question, but our solution is much simpler than what I see here. We use it for WCF calls with VS2010 and up. The string url can come from app settings or another source. In my case it is a drop down list where the user picks the server. TheService was configured through VS add service reference.
private void CallTheService( string url )
{
TheService.TheServiceClient client = new TheService.TheServiceClient();
client.Endpoint.Address = new System.ServiceModel.EndpointAddress(url);
var results = client.AMethodFromTheService();
}
A: If you are fetching the URL from a database, you can manually assign it to the web service proxy class's Url property. This should be done before calling the web method.
If you would like to use the config file, you can set the proxy class's URL behavior to Dynamic.
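For instance (the proxy class name and lookup helper here are hypothetical; the Url property itself is inherited from SoapHttpClientProtocol on all generated web reference proxies):
// MyServiceProxy is whatever class "Add Web Reference" generated.
MyService.MyServiceProxy svc = new MyService.MyServiceProxy();
svc.Url = GetServiceUrlFromDatabase(); // your own lookup code
int result = svc.SomeWebMethod();      // hypothetical web method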
A: Definitely using the Url property is the way to go. Whether to set it in the app.config, the database, or a third location sort of depends on your configuration needs. Sometimes you don't want the app to restart when you change the web service location. You might not have a load balancer scaling the backend. You might be hot-patching a web service bug. Your implementation might have security configuration issues as well. Whether it's production db usernames and passwords or even the ws security auth info. The proper separation of duties can get you into some more involved configuration setups.
If you add a wrapper class around the proxy generated classes, you can set the Url property in some unified fashion every time you create the wrapper class to call a web method.
A: open solition explorer
right click the webservice change URL Behavior to Dynamic
click the 'show all files' icon in solution explorer
in the web reference edit the Reference.cs file
change constructer
public Service1() {
this.Url = "URL"; // etc. string variable this.Url = ConfigClass.myURL
}
A: For me, a reference to a web service is a service reference. Anyway, it's very easy. As someone said, you just have to change the URL in the web.config file.
<system.serviceModel>
<bindings>
<basicHttpBinding>
<binding name="YourServiceSoap" />
</basicHttpBinding>
</bindings>
<client>
<!-- CHANGE THE LINE BELOW TO CHANGE THE URL -->
<endpoint address="http://10.10.10.100:8080/services/YourService.asmx"
binding="basicHttpBinding" bindingConfiguration="YourServiceSoap"
contract="YourServiceRef.YourServiceSoap" name="YourServiceSoap" />
</client>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
}
|
Q: Generic LINQ query predicate? Not sure if this is possible or if I'm expressing correctly what I'm looking for, but I have the following piece of code in my library repeatedly and would like to practice some DRY.
I have set of SQL Server tables that I'm querying based on a simple user-supplied search field ala Google. I'm using LINQ to compose the final query based on what's in the search string. I'm looking for a way to use generics and passed in lambda functions to create a reusable routine out of this:
string[] arrayOfQueryTerms = getsTheArray();
var somequery = from q in dataContext.MyTable
select q;
if (arrayOfQueryTerms.Length == 1)
{
somequery = somequery.Where<MyTableEntity>(
e => e.FieldName.StartsWith(arrayOfQueryTerms[0]));
}
else
{
foreach(string queryTerm in arrayOfQueryTerms)
{
if (!String.IsNullOrEmpty(queryTerm))
{
somequery = somequery
.Where<MyTableEntity>(
e => e.FieldName.Contains(queryTerm));
}
}
}
I was hoping to create a generic method with signature that looks something like:
private IQueryable<T> getQuery<T>(
    T myTableEntity, string[] arrayOfQueryTerms, Func<T, bool> predicate)
I'm using the same search strategy across all my tables, so the only thing that really differs from usage to usage is the MyTable & MyTableEntity searched and the FieldName searched. Does this make sense? Is there a way with LINQ to dynamically pass in the name of the field to query in the where clause? Or can I pass in this as a predicate lambda?
e => e.FieldName.Contains(queryTerm)
I realize there a million and a half ways to do this in SQL, probably easier, but I'd love to keep everything in the LINQ family for this one. Also, I feel that generics should be handy for a problem like this. Any ideas?
A: What it sounds like is you want basically a conditional predicate builder..
I hope you can mold this into something you are looking for, good luck!
http://www.albahari.com/nutshell/predicatebuilder.aspx
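In the same spirit, here's a minimal hand-rolled sketch for the field-name part of the question (independent of PredicateBuilder; it reuses the question's MyTableEntity/FieldName names):
using System;
using System.Linq;
using System.Linq.Expressions;

static class SearchPredicates
{
    // Builds e => e.<fieldName>.Contains(term) for any entity type T.
    public static Expression<Func<T, bool>> ContainsTerm<T>(string fieldName, string term)
    {
        var e = Expression.Parameter(typeof(T), "e");
        var field = Expression.PropertyOrField(e, fieldName);
        var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
        var body = Expression.Call(field, contains, Expression.Constant(term));
        return Expression.Lambda<Func<T, bool>>(body, e);
    }
}

// Usage against the question's table:
// somequery = somequery.Where(SearchPredicates.ContainsTerm<MyTableEntity>("FieldName", queryTerm));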
A: You might want to look at expression trees:
IQueryable<T> getQuery<T>(T myTableEntity, string[] arrayOfQueryTerms, Expression<Func<T, string>> fieldSelector)
{
    var fieldOrProperty = getMemberInfo(fieldSelector);
    /* ... */
}

MemberInfo getMemberInfo<T>(Expression<Func<T, string>> expr)
{
    var memberExpr = expr.Body as MemberExpression;
    if (memberExpr != null) return memberExpr.Member;
    throw new ArgumentException();
}

var q = getQuery<FooTable>(foo, new[] { "Bar", "Baz" }, x => x.FieldName);
A: It sounds like you're looking for Dynamic Linq. Take a look here. This allows you to pass strings as arguments to the query methods, like:
var query = dataSource.Where("CategoryID == 2 && UnitPrice > 3")
.OrderBy("SupplierID");
Edit: Another set of posts on this subject, using C# 4's Dynamic support: Part 1 and Part 2.
A: I suggest trying the Gridify library.
It is much easier to work with, and it also has better performance.
usage example:
query = dataSource.ApplyFiltering("FieldName=John");
this is a performance comparison between dynamic LINQ libraries.
A: I recently had to do this same thing. You will need Dynamic Linq here is a way to keep this strongly typed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: How do you remotely update Java applications? We've got a Java server application that runs on a number of computers, all connected to the Internet, some behind firewalls. We need to remotely update the JAR files and startup scripts from a central site, with no noticeable interruption to the app itself.
The process has to be unattended and foolproof (i.e. we can't afford to break the app due to untimely internet outages).
In the past we've used a variety of external scripts and utilities to handle similar tasks, but because they have their own dependencies, the result is harder to maintain and less portable. Before making something new, I want to get some input from the community.
Has anyone found a good solution for this already? Got any ideas or suggestions?
Just to clarify: This app is a server, but not for web applications (no webapp containers or WAR files here). It's just an autonomous Java program.
A: You should definitely take a look at OSGi, it was created just for these cases (especially for embedded products) and is used by a large number of companies. You can update jar "bundles", add and remove them, while the app is running. I haven't used it myself, so I don't know about the quality of the open source frameworks/servers, but here is a bunch of useful links to get you started:
http://www.osgi.org/Main/HomePage
http://www.aqute.biz/Code/Bnd
http://blog.springsource.com/2008/02/18/creating-osgi-bundles/
http://blog.springsource.com/
http://www.knopflerfish.org/
http://felix.apache.org/site/index.html
A: I'd recommend Capistrano for multi-server deployment. While it is built for deploying Rails apps, I've seen it used successfully to deploy Java applications.
Link:
Capistrano 2.0 Not Just for Rails
A: It's very hard to make the update atomic, especially if you have any database updating to do.
But, if you don't, what you can do is first make sure your application can run from a relative path. That is, you can put your app in some directory, and all of the important files are found relative to that location, so that your actual installation location is not really important.
Next, duplicate your installation. Now, you have the "running" version and you have the "new" version.
Update the "new" version using whatever tech you like (FTP, rsync, paper tape, whatever floats your boat).
Verify your installation (checksums, quick unit tests, whatever you need -- even start it up on a test port if you like).
When you are happy with the new installation, shut down the original running instance, RENAME the original directory (mv application application_old), rename the NEW directory (mv application_new application), and start it back up.
Your down time is reduced to server shut down and start up time (since rename is "free").
If, by chance, you detect a critical error, you have your original version still there. Stop the new server, rename it back, restart the old. Very fast fall back.
The other nice thing is that your service infrastructure is static (like your rc scripts, cron jobs, etc.) since they point to the "application" directory, and it doesn't change.
It can also be done with soft links instead of renaming directories. Either way is fine.
But the technique is simple, and near bullet proof if your application cooperates.
Now, if you have DB changes, well, that's completely different nasty issue. Ideally if you can make your DB changes "backward compatible", then hopefully the old application version can run on the new schema, but that's not always possible.
A: JAR files cannot be modified while the JVM is running on top of them; attempting it will result in errors. I have tried similar tasks, and the best I came up with is making a copy of the updated JAR and transitioning the startup script to look at that copy. Once you have the updated JAR, start it up and wait for the old instance to end after giving it the signal to do so. Unfortunately this means a loss of GUI etc. for a moment, but serializing most structures in Java is easy, and the current GUI state could be transferred to the updated application before actually closing (some things may not be serializable, though!).
A: I would use ansible to distribute jars to multiple servers and run update scripts.
For an update to be unnoticeable to users, the application would have to stop the old instance and bring up a new one very quickly, which is something Java apps aren't very good at.
There are ways to make update without any downtime, but none of them is free. Choosing a solution you should consider what downtime is acceptable to you and what cost can you accept to achieve it.
I see two options:
*
*Rolling update, which means each time you update you spin up an instance with the new version, then validate if it is working correctly, and if it is, you kill the old version. This solution will require some kind of proxy (eg. haproxy) which will direct the traffic to the valid instances.
*Optimizing application start-up time.
As Will Hartung mentioned you will also have to consider external dependencies like database schema which you might also want to update.
In order to get seamless updates, you might also want to do a rolling update on a database. It would require you to do not introduce any breaking changes between any two consecutive releases of the application. Eg. when you want to delete a column, in the first release you remove all references to the column in an application and in next release you are allowed to delete the column in the database.
A: I believe you can hot-deploy JAR files if you use an OSGi-based app server like SpringSource dm Server. I've never used it myself, but knowing the general quality of the Spring portfolio, I'm sure it's worth a look.
A: You didn't specify the type of server apps - I'm going to assume that you aren't running web apps (as deploying a WAR already does what you are talking about, and you very rarely need a web app to do pull type updates. If you are talking about a web app, the following discussion can still apply - you'll just implement the update check and ping-pong for the WAR file instead of individual files).
You may want to take a look at jnlp - WebStart is based on this (this is a client application deployment technology), but I'm pretty sure that it could be tailored to performing updates for a server type app. Regardless, jnlp does a pretty good job of providing descriptors that can be used for downloading required versions of required JARs...
Some general thoughts on this (we have several apps in the same bucket, and are considering an auto-update mechanism):
*
*Consider having a bootstrap.jar file that is capable of reading a jnlp file and downloading required/updated jars prior to launching the application.
*JAR files can be updated even while an app is running (at least on Windows, and that is the OS most likely to hold locks on running files). You can run into problems if you are using custom class loaders, or you have a bunch of JARs that might be loaded or unloaded at any time, but if you create mechanisms to prevent this, then overwriting JARs then re-launching the app should be sufficient for update.
*Even though it is possible to overwrite JARs, you might want to consider a ping-pong approach for your lib path (if you don't already have your app launcher configured to auto-read all jar files in the lib folder and add them to the class path automatically, then that's something you really do want to do). Here's how ping-pong works:
App launches and looks at lib-ping\version.properties and lib-pong\version.properties and determines which is newer. Let's say that lib-ping has a later version. The launcher searches for lib-ping*.jar and adds those files to the CP during the launch. When you do an update, you download jar files into lib-pong (or copy jar files from lib-ping if you want to save bandwidth and the JAR didn't actually change - this is rarely worth the effort, though!). Once you have all JARs copied into lib-pong, the very last thing you do is create the version.properties file (that way an interrupted update that results in a partial lib folder can be detected and purged). Finally, you re-launch the app, and bootstrap picks up that lib-pong is the desired classpath (see the sketch after this list).
*ping-pong as described above allows for a roll-back. If you design it properly, you can have one piece of your app that you test the heck out of and then never change that checks to see if it should roll-back a given version. That way if you do mess up and deploy something that breaks the app, you can invalidate the version. This part of the application just has to delete the version.properties file from the bad lib-* folder, then re-launch. It's important to keep this part dirt simple because it's your fail safe.
*You can have more than 2 folders (instead of ping/pong, just have lib-yyyymmdd and purge all but the newest 5, for example). This allows for more advanced (but more complicated!) rollback of JARs.
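Here's a minimal sketch of that bootstrap's classpath selection in Java (modern syntax for brevity; the folder layout, the version property name, and the com.example.Main entry point are all illustrative, not from the original answer):
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Properties;

public class Bootstrap {
    // Reads lib-*/version.properties; -1 means missing or partial update.
    static long version(File dir) throws IOException {
        File f = new File(dir, "version.properties");
        if (!f.exists()) return -1;
        Properties p = new Properties();
        try (InputStream in = new FileInputStream(f)) {
            p.load(in);
        }
        return Long.parseLong(p.getProperty("version", "-1"));
    }

    public static void main(String[] args) throws Exception {
        File ping = new File("lib-ping");
        File pong = new File("lib-pong");
        File lib = version(ping) >= version(pong) ? ping : pong;

        // Add every JAR in the chosen folder to the classpath.
        File[] jars = lib.listFiles((d, n) -> n.endsWith(".jar"));
        URL[] urls = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) {
            urls[i] = jars[i].toURI().toURL();
        }
        ClassLoader cl = new URLClassLoader(urls, Bootstrap.class.getClassLoader());

        // Hand off to the real application's entry point.
        cl.loadClass("com.example.Main")
          .getMethod("main", String[].class)
          .invoke(null, (Object) args);
    }
}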
A: We use Eclipse which the update system of OSGi, and our experience is very good.
Recommended!
A: The latest version of Java Web Start allows for injecting an application in the local cache without actually invoking the program and it can be marked as "offline". Since the cache is what is used to invoke the program, it will be updated for the next run only. For this to work you will most likely need jars with version numbers in their name (e.g. our-library-2009-06-01.jar).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: PHP and Dreamweaver Integration This is an issue I've run into with a website I'm designing. I'm using a template based design for the site, so the information on the page is thrown inside the template when the page is accessed. The information shown changes based on the page attribute passed in as a GET request.
So, to actually load the information into the body area of the page, the given PHP script needs to place that information in a $bodyout variable when it ends. This is fine and good, but the information then shows up as a cute yellow box in Dreamweaver, making WYSIWYG editing of the site not possible.
Ideally, for most of the pages on the site, I'd like for my secretary to be able to go in there in Dreamweaver to edit the page, since she doesn't know PHP at all. Is there an elegant way to have a PHP script show the data it is going to output in Dreamweaver? (Especially if that data is static.)
EDIT I should clarify: By template-based design, I don't mean any particular program. I just have the "layout" HTML/CSS in one script, and that script fills itself with the content. The whole thing is PHP so far, no third party programs involved.
A: In short, no. If you're using a program that is able to use a different template engine, though, it can possibly at least produce valid HTML which won't cause Dreamweaver to freak out. It sounds like you might not be the developer of the application generating the templates, but if you're interested, this site explains a bunch of different ideas on templating.
A: I did a very similar thing, and I ended up just setting up a web-based visual editor; the client could edit the main body of some pages online. That may be your best solution. Unfortunately I don't remember the name of the editor that I could set to edit only certain parts of the page. I will respond again when I find it.
Your other option is to use Dreamweaver templates and set the PHP to load the whole page, header and all - although Dreamweaver templates are fairly frail and I've had issues with them.
A: Dreamweaver is never going to integrate well into that kind of situation, but it sounds like it might be overkill for your needs anyway, as it's really better suited for page-layout type jobs.
From what you've said, a simple Content Management System with an inline editor like TinyMCE (or whatever) would probably suffice.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Protecting cells in Excel but allow these to be modified by VBA script I am using Excel where certain fields are allowed for user input and other cells are to be protected. I have used Tools > Protect Sheet; however, after doing this I am not able to change the values via the VBA script. I need to restrict the sheet to stop user input, and at the same time allow the VBA code to change the cell values based on certain computations.
A: A basic but simple to understand answer:
Sub Example()
    ActiveSheet.Unprotect
    ' ... program logic ...
    ActiveSheet.Protect
End Sub
A: You can modify a sheet via code by taking these actions
*
*Unprotect
*Modify
*Protect
In code this would be:
Sub UnProtect_Modify_Protect()
ThisWorkbook.Worksheets("Sheet1").Unprotect Password:="Password"
'Unprotect
ThisWorkbook.ActiveSheet.Range("A1").FormulaR1C1 = "Changed"
'Modify
ThisWorkbook.Worksheets("Sheet1").Protect Password:="Password"
'Protect
End Sub
The weakness of this method is that if the code is interrupted and error handling does not capture it, the worksheet could be left in an unprotected state.
The code could be improved by taking these actions
*
*Re-protect
*Modify
The code to do this would be:
Sub ReProtect_Modify() ' note: hyphens are not legal in VBA procedure names
ThisWorkbook.Worksheets("Sheet1").Protect Password:="Password", _
UserInterfaceOnly:=True
'Protect, even if already protected
ThisWorkbook.ActiveSheet.Range("A1").FormulaR1C1 = "Changed"
'Modify
End Sub
This code renews the protection on the worksheet, but with the ‘UserInterfaceOnly’ set to true. This allows VBA code to modify the worksheet, while keeping the worksheet protected from user input via the UI, even if execution is interrupted.
This setting is lost when the workbook is closed and re-opened. The worksheet protection is still maintained.
So the 'Re-protection' code needs to be included at the start of any procedure that attempts to modify the worksheet or can just be run once when the workbook is opened.
A: I don't think you can set any part of the sheet to be editable only by VBA, but you can do something that has basically the same effect -- you can unprotect the worksheet in VBA before you need to make changes:
wksht.Unprotect()
and re-protect it after you're done:
wksht.Protect()
Edit: Looks like this workaround may have solved Dheer's immediate problem, but for anyone who comes across this question/answer later, I was wrong about the first part of my answer, as Joe points out below. You can protect a sheet to be editable by VBA-only, but it appears the "UserInterfaceOnly" option can only be set when calling "Worksheet.Protect" in code.
A: As a workaround, you can create a hidden worksheet, which would hold the changed value. The cell on the visible, protected worksheet should display the value from the hidden worksheet using a simple formula.
You will be able to change the displayed value through the hidden worksheet, while your users won't be able to edit it.
A: Try using
Worksheet.Protect "Password", UserInterfaceOnly := True
If the UserInterfaceOnly parameter is set to true, VBA code can modify protected cells.
Note however that this parameter does not stick. It needs to be reapplied each time the file is opened.
A: I selected the cells I wanted locked in Sheet1, placed the suggested code in the Workbook_Open() event, and it worked like a charm.
ThisWorkbook.Worksheets("Sheet1").Protect Password:="Password", _
UserInterfaceOnly:=True
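For completeness, a minimal sketch of wiring that up (this goes in the ThisWorkbook module; the sheet name and password are placeholders):
Private Sub Workbook_Open()
    ' Re-apply protection on every open, since UserInterfaceOnly
    ' is not persisted when the workbook is saved.
    ThisWorkbook.Worksheets("Sheet1").Protect Password:="Password", _
        UserInterfaceOnly:=True
End Sub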
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
}
|
Q: Implementation example for Repository pattern with Linq to Sql and C# I am looking for a Repository pattern implementation example/resource that follows domain driven design principles for my ASP.net MVC application. Does anyone have a good example or learning resource that can be shared?
A: It's not an uncontroversial implementation, but Rob Conery's web storefront project has implemented repository via Linq to Sql in C#.
http://blog.wekeroad.com/
Source is available.
He's not quite doing strict DDD, but his TDD is generally sending him out in that direction. The one caveat is that he has multiple repositories with no aggregate roots, so it's far from a textbook example. Also, earlier implementations of the repository returned IQueryable, so there were no domain boundaries on the repository, which is the source of most of the noise made about his design.
A: Domain Driven Design by Eric Evans is a great place to learn all about the Repository pattern and more. http://dddcommunity.org/books/
A: Here is an article describing an implementation of the repository pattern using Linq to SQL. The full code is open source, available @ github.
http://www.macskeptic.com/living/by/the/code/c/2009/07/02/the-repository-pattern/
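For a quick flavor, here's a minimal sketch of a generic repository over LINQ to SQL (not from any of the linked resources; it deliberately omits aggregate-root boundaries and leaves SubmitChanges to the caller's unit of work):
using System.Data.Linq;
using System.Linq;

public interface IRepository<T> where T : class
{
    IQueryable<T> All();
    void Add(T entity);
    void Delete(T entity);
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly DataContext _context;

    public Repository(DataContext context)
    {
        _context = context;
    }

    public IQueryable<T> All()
    {
        return _context.GetTable<T>();
    }

    public void Add(T entity)
    {
        _context.GetTable<T>().InsertOnSubmit(entity);
    }

    public void Delete(T entity)
    {
        _context.GetTable<T>().DeleteOnSubmit(entity);
    }
    // Callers commit via _context.SubmitChanges() in their own unit of work.
}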
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: What is the T-SQL syntax to connect to another SQL Server? If I need to copy a stored procedure (SP) from one SQL Server to another I right click on the SP in SSMS and select Script Stored Procedure as > CREATE to > New Query Editor Window. I then change the connection by right clicking on that window and selecting Connection > Change Connection... and then selecting the new server and F5 to run the create on the new server.
So my question is "What is the T-SQL syntax to connect to another SQL Server?" so that I can just paste that in the top of the create script and F5 to run it and it would switch to the new server and run the create script.
While typing the question I realized that if I gave you the background to what I'm trying to do, you might come up with a faster and better way for me to accomplish this.
A: In SQL Server Management Studio, turn on SQLCMD mode from the Query menu.
Then at the top of your script, type in the command below
:Connect server_name[\instance_name] [-l timeout] [-U user_name [-P password]]
If you are connecting to multiple servers, be sure to insert GO between connections; otherwise your T-SQL won't execute on the server you're thinking it will.
A: If I were to paraphrase the question - is it possible to pick server context for query execution in the DDL - the answer is no. Only database context can be programmatically chosen, with USE (having already preselected the server context externally).
Linked servers and OPENQUERY can give access to the DDL, but require somewhat of a rewrite of your code to encapsulate it as a string - making it difficult to develop/debug.
Alternately you could resort to an external driver program to pick up SQL files and send them to the remote server via OPENQUERY. However, in most cases you might as well have connected to the server directly in the first place to evaluate the DDL.
A: Also, make sure when you write the query involving the linked server, you include brackets like this:
SELECT * FROM [LinkedServer].[RemoteDatabase].[User].[Table]
I've found that at least on 2000/2005 the [] brackets are necessary, at least around the server name.
A: Whenever we are trying to retrieve any data from another server, we need two steps.
First step:
-- ORACLE is the name we give the linked server we want to connect to
EXEC sp_addlinkedserver @server = 'ORACLE'
Second step:
-- dbo is the owner name; to find a table's owner, execute sp_help 'TableName'
SELECT * INTO Destination_Table_Name
FROM ORACLE.Source_DatabaseName.dbo.Source_Table
A: If you are connecting to multiple servers, you should add a GO before switching servers, or your SQL statements will run against the wrong server.
e.g.
:CONNECT SERVER1
SELECT * FROM Table1
GO
:CONNECT SERVER2
SELECT * FROM Table2
GO
http://www.sqlmatters.com/Articles/Changing%20the%20SQL%20Server%20connection%20within%20an%20SSMS%20Query%20Windows%20using%20SQLCMD%20Mode.aspx
A: Update: to connect to another SQL Server and execute SQL statements against it, you have to use the sqlcmd utility. This is typically done in a batch file.
You can combine this with xp_cmdshell if you want to execute it from within Management Studio.
One way is to configure a linked server. Then you can append the linked server and the database name to the table name. (SELECT * FROM linkedserver.database.dbo.TableName)
USE master
GO
EXEC sp_addlinkedserver
'SEATTLESales',
N'SQL Server'
GO
A: Try creating a linked server (which you can do with sp_addlinkedserver) and then using OPENQUERY
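A minimal sketch of that (the linked server name, data source, and table are placeholders):
EXEC sp_addlinkedserver @server = 'REMOTESRV', @srvproduct = '',
    @provider = 'SQLNCLI', @datasrc = 'remotehost\instance'
GO
SELECT * FROM OPENQUERY(REMOTESRV, 'SELECT TOP 10 * FROM SomeDb.dbo.SomeTable')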
A: On my C: drive I first create a text file that will create a new table. You can use whatever you want in this text file.
In this case the text file is called "Bedrijf.txt".
The content:
Print 'START(A) create table'
GO 1
If not EXISTS
(
SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Bedrijf'
)
BEGIN
CREATE TABLE [dbo].[Bedrijf] (
[IDBedrijf] [varchar] (38) NOT NULL ,
[logo] [varbinary] (max) NULL ,
[VolledigeHandelsnaam] [varchar] (100) NULL
) ON [PRIMARY]
END
GO
Save it.
Then I create another text file with the name "Bedrijf.bat" and the extension .bat.
Its content:
OSQL.EXE -U Username -P Password -S IPaddress -i C:\Bedrijf.txt -o C:\Bedrijf.out -d myDatabaseName
Save it and double-click it in Explorer to execute.
The results will be saved in a txt file on your C drive with the name "Bedrijf.out"
it shows
1> 2> 3> START(A) create table
if all goes well
That's it
A: Try PowerShell. Something like:
$cn = new-object system.data.SqlClient.SQLConnection("Data Source=server1;Initial Catalog=db1;User ID=user1;Password=password1");
$cmd = new-object system.data.sqlclient.sqlcommand("exec Proc1", $cn);
$cn.Open();
$cmd.CommandTimeout = 0
$cmd.ExecuteNonQuery()
$cn.Close();
A: If possible, check out SSIS (SQL Server Integration Services). I am just getting my feet wet with this toolkit, but already am looping over 40+ servers and preparing to wreak all kinds of havoc ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
}
|
Q: GPU programming on Xbox 360 I'm looking for some insight into XNA on Xbox 360, mainly whether it's possible to run vector-based float mathematics on its GPU?
If there's a way, can you point me into the right direction?
A: I don't claim to be an expert on this, but hopefully this can point you in a helpful direction.
Is it possible? Yes. You probably already know that the GPU is good at such calculations (hence the question) and you can indeed control the GPU using XNA. Whether or not it will suit your needs is a different matter.
To make use of the GPU, you'll presumably want to write shaders using HLSL. There's a decent introduction to HLSL in the context of XNA at Reimers which you might want to run through. Notably, that tutorial focuses on making the GPU do graphics-related crunching, but what you write in the shaders is up to you. If your vector-based float math is for the purpose of rendering (and thus can stay in the GPU domain), you're in luck and can stop here.
Likely rendering onscreen is not what you're after. Now, you have a fair amount of flexibility in HLSL as far as doing your math goes. Getting the results out to the CPU, however, is not how the system was designed. This is getting fuzzy for me, but Shawn Hargreaves (an XNA dev) states on more than one occasion that getting output from the GPU (other than rendered onscreen) is non-trivial and has performance implications. Retrieving data involves a call to GetData which will cause a pipeline stall.
So it can be done. The XNA framework will let you write shaders for the 360 (which supports Shader Model 3.0 plus a few extensions) and it is possible to get those results out, though it may not be efficient enough for your needs.
A: As was stated above - the Xbox 360 is fully capable of any HLSL calculation; specifically, it can handle vertex and pixel shader model 3 instructions and has an enhanced set of instructions that are specific to the platform.
Since HLSL is actually vector based you have all the tools you need - dot, cross, vector operations and matrix calculations.
If you want to send calculations to the GPU and edit/use the results on the CPU, you can write to a texture and then fetch it on the CPU side and decode it - particles or physical interactions (such as water) are a few of the occasions when you might want to do so.
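For example, a rough sketch of that round trip (XNA 4.0 style; sizes and names here are assumptions):
// Render the calculation into an offscreen texture...
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 256, 256);
GraphicsDevice.SetRenderTarget(target);
// ... draw a full-screen quad with the HLSL effect that does the math ...
GraphicsDevice.SetRenderTarget(null); // back to the back buffer

// ...then pull the texels back to the CPU. As mentioned above, this
// stalls the pipeline because the GPU has to finish first.
Color[] results = new Color[256 * 256];
target.GetData(results);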
A: I wonder if there were any new results (articles, source code etc.) on the subject matter (doing some form of CUDA-like computation) on Xbox(es). There was one research paper with a source code promise (LGP on GPGPU) - apparently gone with the wind - or maybe not ? :-) and one more general (LGP on Xbox 360) but both dependent on XNA.
Oh and for these two blog articles trying so hard to turn people away from God forbid :-) using Xbox for anything but yet another triangle coloring they still exist here : 1st, 2nd for all their tragicomic value of contrived "examples". No one really wanted to "draw a primitive" or anything so pedestrian, or go via XNA at all (if they could help it :-)
XNA was meant to slow things down while creating the appearance of openness, but that's also a historical note, and the real question is whether anyone has done anything along these lines. There are much stronger Xboxes these days but that may not mean much unless basic CUDA-like access has been relaxed.
The most tragic thing about the whole Xbox GPU usage blocking is that it was the Xbox itself that ended up desperately needing just a bit of help from its own GPU for a thing that was magical and shining at the time and then got suffocated (Kinect). All it needed was for the Xbox API to open just a little door for basic CUDA-like computation and someone would have written an efficient contour/skeleton constructor and smoother for free in a matter of months (including insiders, no open source "interruptions" :-) just because it was a little bit of magic.
Kinect was initially promised something like 10% of the GPU (reducing 3D grayscale to contour and skeleton is image processing) and it didn't need more than a few percent (with efficient data loading and reading, grayscale 640x480 to 240 cores at 2ns per 3D op). Meaning that there was an initial API which was removed/blocked - out of fear that some sword won't be "metallic enough"? :-))
By the time MS opened at least the Kinect "protocol" it was too late for everything (skeleton too shaky, too slow, no way to turn it off and reprocess raw data), but I can't help wondering if some people maybe continued doing something, or maybe someone later published some of that "forbidden" info.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Bad OO design problem - I need some general functionality in Java but don't know how to implement it I'm developing a small UML Class editor in Java, mainly a personal project; it might end up on SourceForge if I find the time to create a project for it.
The project is quite advanced : I can create classes, move them around, create interfaces, create links, etc.
What I'm working on is the dialog box for setting class/interface properties and creating new classes/interfaces.
For example, I have a class that extends JDialog. This is the main "window" for editing classes and interfaces (well, there is a class for each). It contains a JTabbedPane which in turn contains JPanels.
These JPanels are actually custom ones. I created an abstract class that extends JPanel. This class uses components (defined by its subclasses) and adds their values to a JTable (also contained in the JPanel).
For example, if I want to edit a class' attributes, the JPanel will contain a JTextField for entering the name of the attribute as well as another one for entering its type. There is also a set of buttons for processing the data entered in these fields. When I click "Save", the data I entered in the JTextFields is added to the JTable (à la Enterprise Architect). The concrete classes that extend the abstract one are responsible for defining control and deciding what to do with the data when a line is added or deleted from the JTable. The JTable management is, however, the responsibility of the abstract class.
Here is my problem: in OO, a class has methods, and an interface has methods too. I told myself: I could use the same concrete custom JPanel (AttributesPanel, which extends the abstract JPanel class I created) to store the methods for a class or an interface.
However, the class needs to keep a copy (as an attribute) of the class or interface I am working on. That way, when a method is added to it, I can call editedClass.addMethod() (or editedInterface.addMethod()). The problem is that I have no way of telling whether I am working on a Class or an Interface.
The solution I found is ugly : keep an attribute editedClass and an attribute editedInterface in the AttributesPanel class. According to whether I am editing a class or interface, one of these attributes will be null while to other will not.
It is quite ugly if you ask me. In fact, I can hear my software engineering teachers in my head screaming in agony while burning (well, actually, freezing) in the ninth circle of Hell.
The quick way to fix this design problem would be to create an interface called "ObjectWithMethods", which my Class and Interface classes will implement. That way, I will only have to put an ObjectWithMethods parameter in my AttributesPanel class.
But does that mean that I should create a class named "ObjectWithAttributes", or "ObjectWithBlahBlah"? I see some good "TheDailyWTF" potential here... Besides, I don't think I should modify my domain objects (a Class, Interface, Note, Relationship (for my UML editor)) or create a new Interface just for the sake of some UI consideration...
What do you think?
If you need more clarification (because I am very tired right now and I tend to write quite badly (especially in English - my mother tongue is French) while in this state of mind...), just ask and I'll edit this question.
Cheers,
Guillaume.
A: When I read your question it really seems like you are describing a place to use the visitor pattern.
The reason the visitor pattern should work here is an idea known as double dispatch. Your UI code will make a call and pass a reference to itself, then the class or interface ends up calling the original caller. Since the class or interface is the one making the method call it knows its own type and how to do the work specific to its type.
Of course, my description is insufficient to implement this technique, so you'll want to read up on it. I think it is well documented. For example, I found this in about 2 seconds in Java that should get you started: http://www.javaworld.com/javaworld/javatips/jw-javatip98.html
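To make the idea concrete, here is a minimal sketch of the double dispatch (class and method names are illustrative, not taken from your project):
// The panel holds one of these and never needs instanceof checks.
interface MethodContainer {
    void accept(MethodContainerVisitor visitor);
    void addMethod(String signature);
}

interface MethodContainerVisitor {
    void visit(UmlClass editedClass);
    void visit(UmlInterface editedInterface);
}

class UmlClass implements MethodContainer {
    public void accept(MethodContainerVisitor visitor) { visitor.visit(this); } // dispatches as UmlClass
    public void addMethod(String signature) { /* ... */ }
}

class UmlInterface implements MethodContainer {
    public void accept(MethodContainerVisitor visitor) { visitor.visit(this); } // dispatches as UmlInterface
    public void addMethod(String signature) { /* ... */ }
}
The AttributesPanel then keeps a single MethodContainer field instead of the two half-null attributes.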
A: Usually, I just do the most straightforward thing, and start thinking of factoring out interfaces, when I start to see too many if( .. instanceof ..)-like constructs in my code. It doesn't cost me much with modern IDE code-refactoring capabilities.
In your specific case, I would consider implementing the diagrams provided in the UML specification, because they were so kind as to specify UML using UML notation!
A: You have an application. In that app you represent and edit some data.
That data represents a programming language class or a programming language interface.
When you make an editor for some data, sometimes you have to add additional/complementary information. For example, each class chart may have a different line color, which has nothing to do with the attributes or the methods of your class.
The same goes for the field or property that indicates whether you are editing a class or an interface.
I suggest doing a few things.
Separate the represented data from the code or logic of your program:
if you have something like:
// all code, classes, mixed up
public class JCustomPanel {
    protected ChartClass charts;
    protected ArrayList<String> myClassAttributes;
    protected ArrayList<String> myClassMethods;

    public void panelDoSomething() { /* ... */ }
    public void classDoSomething() { /* ... */ }
    public void interfaceDoSomething() { /* ... */ }
    // ...
} // class JCustomPanel
Change to this:
// things related to a single class or interface,
// nothing to do with the chart
public class JClassRepresentation {
    private ArrayList<String> attributes;
    private ArrayList<String> methods;
    private boolean isInterface;

    public void classDoSomething() { /* ... */ }
    public void interfaceDoSomething() { /* ... */ }
    // ...
} // class JClassRepresentation

// things related to the editor,
// contains the classes and interfaces,
// but, as separate stuff
public class JCustomPanel {
    private ArrayList<JClassRepresentation> classes;
    private int pagesCount;

    public void interfaceDoSomething() { /* ... */ }
    // ...
} // class JCustomPanel
Cheers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: mobile page: dynamically created image sporadic loading I have a page for a small mobile site that has an image whose src value is set via a getImage.aspx method, wherein an image is dynamically built and returned to the image tag. Most of the time it works great. However, there are moments when the image just doesn't load on the first shot. Hitting refresh helps most of the time, but I'd like to have this image load every time. Anyone else have this issue and moved beyond it?
A: I have never had the issue of a dynamic image not loading without code errors. A suggestion would be to move the image generation to a handler instead of a page to avoid the additional overhead. It could be the async requests on the mobile devices getting limited.
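For instance, a minimal sketch of such a handler (names are assumptions); map it to something like getImage.ashx and point the image src there:
using System.Web;

public class GetImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        byte[] imageBytes = BuildImage(context.Request); // your existing generation code
        context.Response.ContentType = "image/png";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.BinaryWrite(imageBytes);
    }

    public bool IsReusable { get { return true; } }

    private byte[] BuildImage(HttpRequest request) { /* hypothetical helper */ return new byte[0]; }
}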
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using glibc, why does my gethostbyname fail after I/DHCP has changed the DNS server? If our server (running on a device) starts before a DHCP lease has been acquired then it can never connect using a hostname.
If that happens it can find hosts by IP address but not by DNS.
I initially thought that the Curl DNS cache was at fault as the curl connections failed. But I used CURLOPT_DNS_CACHE_TIMEOUT to prevent curl from caching address but connections still failed.
A: It turns out that glibc gethostbyname_r won't automatically reload its configuration if that configuration changes. You have to manually call res_init. See the bug report below.
Note: Neither the man page for gethostbyname_r nor the one for res_init mentions this limitation.
My solution is very specific. It works for our long running server but it is not my ideal solution.
I have a function that checks the mtime of /etc/resolv.conf against the last known mtime (0 if the file does not exist). If the two mtimes differ, I call res_init. This function is called on program startup and then periodically, to reload the configuration when needed.
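A minimal sketch of that check (the period between calls is up to you):
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
#include <sys/stat.h>
#include <time.h>

static time_t last_mtime; /* 0 means "file did not exist" */

/* Call at startup and then periodically, before doing lookups. */
void refresh_resolver(void)
{
    struct stat st;
    time_t mtime = 0;

    if (stat("/etc/resolv.conf", &st) == 0)
        mtime = st.st_mtime;

    if (mtime != last_mtime) {
        last_mtime = mtime;
        res_init(); /* force glibc to re-read the resolver config */
    }
}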
The glibc bug report
libc caches resolv.conf forever
...
That's what res_init() is for, call it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Best Way to Conditional Redirect? Using Rails v2.1, let's say you have an action for a controller that is accessible from more than one location. For example, within the Rails app, you have a link to edit a user from two different views, one on the users index view, and another from another view (let's say from the nav bar on every page).
I'm wondering what is the best way to redirect the user back to the right spot depending on what link they clicked on. For example:
Example 1:
*
*List all users
*Click "edit" on a user in the list
*User clicks "save" on the form, the controller redirects back to 1.
Example 2:
*
*The user could be on any page within the application, the nav bar shows a link to edit the current user
*The user clicks on the link to edit
*The user clicks "save" on the form, controller redirects back to whatever page they were on when the user clicked the "edit" link in the nav bar.
I've seen it done in the past by:
*
*Placing a parameter on the original edit link with the original controller/action in which the link appeared. To make this more DRY, you could use @controller.controller_name and @controller.action_name in a helper.
*The controller saves the parameters to a session variable.
*Once the controller has saved the record, it redirects to the session variable.
What I don't particularly like about this solution is the need to add the parameter to every applicable link in the views. I'm wondering if there's a way to build this all into the controller.
One better way I was thinking was to:
*
*Place a before_filter on the "edit" action to save the referrer (is this reliable enough?) into the session.
*When "update" is hit, the controller will redirect to the session variable and then delete the session variable.
Any thoughts on the best way to do this?
A: Flash: Data which you'll use only for the next request can be stored in the flash. Then you don't have to go about automatically clearing it. It works well for limited bits of application context which you're sure to only need once -- not just for error messages!
I know what you did last HTTP access: If you just need to redirect people to the last URL they were at, then just do that. request.referer will, for most browsers which don't block the information, give you the last URL the person was at.
#in edit controller
...
flash[:page_to_redirect_to] = request.referer || "/my/default/path"
...
#in save controller
redirect_to flash[:page_to_redirect_to] || "/my/default/path"
I wouldn't suggest hardcoding those, incidentally.
before_filter: I see a lot of Rails developers using this as their favorite hammer and turning everything else into nails. Filters are useful when you want to expose functionality to all or substantially all methods in a controller. I don't know that is necessarily the case here, but your mileage may vary. You can combine the filter with the above two tricks if warranted.
A: I think that using before_filter on the edit action is the least obtrusive.
The referer should be reliable enough ... simply have a default in the case of no referer being available (say: someone bookmarked the edit page) and you should be fine.
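A sketch of that approach (controller and paths are placeholders):
class UsersController < ApplicationController
  before_filter :remember_referer, :only => :edit

  def update
    @user = User.find(params[:id])
    @user.update_attributes(params[:user])
    return_to = session[:return_to]
    session[:return_to] = nil # use it once, then clear it
    redirect_to(return_to || users_path)
  end

  private

  def remember_referer
    session[:return_to] = request.referer || users_path # default covers bookmarked edit pages
  end
end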
A: Another approach is to load the form in an overlay, like http://flowplayer.org/tools/overlay/index.html, and then on ajax submit you can close the overlay. I do this in combination with an autocomplete to offer a "add new" option. Once the overlay form is submitted, I return some data in json, which is acted on to fill the autocomplete. (I made my own plugin to help with all that)
It takes a bit more work, but it really works well for the end user.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to make SVN only update files but not add new ones I have a repository of files which are unrelated to each other but are common to multiple projects. Each project might only need a subset of these files. For example:
/myRepo:
/jquery.js
/jquery.form.js
/jquery.ui.js
Project A requires jquery.js and jquery.form.js, whereas Project B requires jquery.js and jquery.ui.js
I could just do a checkout of myRepo into both projects, but that'd add a lot of unnecessary files into both. What I'd like is some sort of way for each Project to only get the files it needs. One way I thought it might be possible is if I put just the required files into each project and then run an svn update on it, but somehow stop SVN from adding new files to each directory. They'd still get the modifications to the existing files, but no unnecessary files would be added.
Is this possible at all?
A: Don't complicate yourself. Either pull out all the files (what is the disadvantage of this? a few more hundreds of KBs of space?), or divide the files into several directories, and only check out the needed directories (using the 'externals' property) in the relevant projects.
A: If I understood your question correctly, you want to share code across projects? If so, look at the svn:externals property.
Externals explained in Subversion Red Book
svn:externals in Windows
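For example (repository URLs and layout below are made up): group the shared files into per-library folders, then attach only the folders a project needs.
# Pre-1.5 externals format is "<local-dir> <URL>"; note that externals
# attach directories, so each library needs its own folder in myRepo.
svn propset svn:externals "jquery http://svn.example.com/myRepo/jquery
jquery.form http://svn.example.com/myRepo/jquery.form" projectA
svn commit -m "pull shared libraries in via externals" projectA
svn update projectA # fetches the external folders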
A: K, this isn't perfect, but it will do. The biggest problem is that it will update each file individually:
find . -print | grep -v '\.svn' | xargs svn update
I am sure that someone with more find mojo can figure out a better way of handling the svn directory exclusion.
A: In our company we store all projects in the same repository, but within their own project folder. Then if code needs to be shared we move it out to a library folder. This helps reusability, and not violating the DRY principle.
So as far as the server is concerned you can export all of your libraries to a central place, and your projects wherever, without overlapping.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Need to create a dynamic ConfigurationSection that also holds the type I need to create a configuration section that is able to store key-value pairs in an app.config file, where the key-value pairs can be added at runtime regardless of their type. It is also important that each value keeps its original type. I need to extend the following interface
public interface IPreferencesBackend
{
bool TryGet<T>(string key, out T value);
bool TrySet<T>(string key, T value);
}
At runtime, I can say something like:
My.Foo.Data data = new My.Foo.Data("blabla");
Pref pref = new Preferences();
pref.TrySet("foo.data", data);
pref.Save();
My.Foo.Data loaded;
pref.TryGet("foo.data", out loaded);
I tried System.Configuration.Configuration.AppSettings, but the problem with that is that it stores the key-value pairs in a string array.
What I need is an implementation of System.Configuration.ConfigurationSection where I can control how each individual setting is serialized. I noticed that the settings generated by Visual Studio kind of do this. It uses reflection to create all the setting keys. What I need is to do this at runtime, dynamically.
[System.Configuration.UserScopedSettingAttribute()]
[System.Diagnostics.DebuggerNonUserCodeAttribute()]
[System.Configuration.DefaultSettingValueAttribute("2008-09-24")]
public global::System.DateTime DateTime {
get {
return ((global::System.DateTime)(this["DateTime"]));
}
set {
this["DateTime"] = value;
}
}
A: Phil Haack has a great article on Creating Custom Configuration Sections
A: I found two great articles on codeproject.com that are explaining these issues in great detail.
Unraveling the Mysteries of .NET 2.0 Configuration
http://www.codeproject.com/KB/dotnet/mysteriesofconfiguration.aspx
User Settings Applied
http://www.codeproject.com/KB/dotnet/user_settings.aspx?display=PrintAll&fid=1286606&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2647446&fr=26
A: That's all you get in an ASCII text file - strings. :-)
However, you can encode the "value" strings to include a type parameter like:
<key="myParam" value="type, value" />
for example:
<key="payRate" value="money,85.79"/>
then have your app do the conversion ...
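A rough sketch of that idea against the IPreferencesBackend interface from the question. Note this only round-trips types that Convert.ChangeType understands (primitives, DateTime, and the like); an arbitrary class such as My.Foo.Data would need a TypeConverter or serializer instead:
using System;
using System.Configuration;
using System.Globalization;

public class AppSettingsBackend : IPreferencesBackend
{
    public bool TrySet<T>(string key, T value)
    {
        var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        config.AppSettings.Settings.Remove(key);
        // store as "type,value" per the suggestion above
        config.AppSettings.Settings.Add(key,
            typeof(T).FullName + "," + Convert.ToString(value, CultureInfo.InvariantCulture));
        config.Save(ConfigurationSaveMode.Modified);
        return true;
    }

    public bool TryGet<T>(string key, out T value)
    {
        value = default(T);
        var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        var entry = config.AppSettings.Settings[key];
        if (entry == null) return false;
        var raw = entry.Value.Split(new[] { ',' }, 2)[1]; // drop the type prefix
        value = (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
        return true;
    }
}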
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I wire users with their respective folders in ASP.NET? When userA uploads a file, his files will be uploaded to folderA; when userB uploads, to folderB; and so on. Roles/Profiles in ASP.NET. Folders will be pre-existing. Anyone?
A: You'll probably want to hand-code that. There's nothing intrinsic to ASP.NET for managing user files. However, the code to do so should be relatively easy. Assuming the username is unique and never changes, you can combine the username with a path (use Path.Combine) and upload to that location. I would also lock down that location so that nobody else can access it.
A: The way that I've done it in the past is to use a base upload folder (say uploads) and in that folder create a folder using the user's ID from the DB. So the structure would be ..\uploads\145 for user with a user ID of 145.
The first thing that my code does is to check to see if the folder exists and if not then calls a Directory.Create() (or whatever the syntax is) to create the folder before uploading.
Further info that you might find helpful: I also rename the file using a GUID which avoids name conflicts if they upload 2 files with the same name. The downside is that you will normally need to maintain a table with the original filename and the physical (GUID) filename.
A: You can just check for the existence of the folder and create it for the user if it doesn't exist, but there are security implications to this. You might also want to try to store the data in a database and tie it to a user... this depends on what you are letting users upload, I guess.
A: There are a few ways you can do this:
Using Forms Authentication
If you use forms authentication, you can set a convention wherein a user's username or id can serve as the basis for a path in your server where the user can upload a file. Note that your user will not have direct access to that folder: the user should be able to download the files from your server via your web application as well.
Using Windows Authentication
If you use windows (e.g., ActiveDirectory) authentication, you can provide user access to both the physical location of the folder and via a web application.
P.S. - Glad to see you here Marlon!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Rails Check box on MySql stored as a null or zero In my Rails app I have a list of items (like a task list) and for each item there are a couple of check boxes to set parameters.
When I submit the form, the checked boxes are stored as zero and the unchecked ones as null in the DB.
The question is: Is there a way to configure it to store the data as a more traditional 0 or 1? I think that storing null as false and 0 as true is a bit confusing, especially if another app (like a C app) needs to read the data.
A: Let's assume that the attribute you are working with is club_member as in "are you a club_member?".
Note that in Ruby/Rails, the way it is working now, model.club_member will return false if it is not checked (value is null, or in Ruby, nil) and true if it is checked (value is 0).
On the whole, I would strongly recommend that instead of letting other applications (like a C app) directly at your data, you should instead build an API in Ruby/Rails to expose the data from your application to external entities. In this manner, you will better encapsulate your application's internals and you won't have to worry about things like this.
However, all that being said, here is your answer:
Use:
value="1"
...attribute in your checkbox HTML tags, and set the default value of the boolean attribute (in your migration) to 0.
A: There are a number of potential reasons for this:
*
*Make sure that the column in the database is of type "boolean" in the migration
*Place a default on boolean values (see the migration sketch after this list)
*Use "check_box" on the form, not "check_box_tag"
*Some versions of Rails had this behaviour on generated scaffolds, I think setting the default fixed this, but I can't quite remember.
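For the first two points, a minimal migration sketch (table and column names are made up):
class AddDoneToTasks < ActiveRecord::Migration
  def self.up
    # a real boolean with a default, so an unchecked box saves as 0/false, not NULL
    add_column :tasks, :done, :boolean, :default => false, :null => false
  end

  def self.down
    remove_column :tasks, :done
  end
end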
A: What about:
value.to_i.zero?
>> a=nil
=> nil
>> a.to_i.zero?
=> true
>> a=0
=> 0
>> a.to_i.zero?
=> true
>> a=3
=> 3
>> a.to_i.zero?
=> false
A: From the rails documentation:
An object is blank if it’s false, empty, or a whitespace string. For example, “”, “ “, nil, [], and {} are blank.
This simplifies:
if !address.nil? && !address.empty?
…to:
if !address.blank?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to change "3 errors prohibited this foobar from being saved" validation message in Rails? In my rails app I use the validation helpers in my active record objects and they are great. When there is a problem I see the standard "3 errors prohibited this foobar from being saved" on my web page along with the individual problems.
Is there any way I can override this default message with my own?
A: The error_messages_for helper that you are using to display the errors accepts a :header_message option that allows you to change that default header text. As in:
error_messages_for 'model', :header_message => "You have some errors that prevented saving this model"
The RubyOnRails API is your friend.
A: The "validates_" methods in your model can all generally be passed a :message => "My Validation Message" parameter.
I generally wrap errors in something like this:
<% if(!@model.errors.empty?) %>
<div id="error_message">
<h2>
<%= image_tag("error.png", :align => "top", :alt => "Error") -%>
Oops, there was a problem editing your information.
</h2>
<%= short_error_messages_for(:model) %>
</div>
<% end %>
Then in my application_helper I iterate over the errors and generate a simple list:
def short_error_messages_for(object_name)
object = instance_variable_get("@#{object_name}")
if object && !object.errors.empty?
content_tag("ul", object.errors.full_messages.collect { |msg| content_tag("li", msg) } )
else
""
end
end
That code is pretty old and probably not how I would write Ruby these days, but you get the gist.
A: You can iterate over the model.errors hash yourself instead of using the errors helper.
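For instance, a minimal sketch:
<ul>
  <% @model.errors.full_messages.each do |msg| %>
    <li><%= h msg %></li>
  <% end %>
</ul>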
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Managed language for scientific computing software Scientific computing is algorithm intensive and can also be data intensive. It often needs to use a lot of memory to run an analysis and release it before continuing with the next. Sometimes it also uses a memory pool to recycle memory for each analysis. Managed languages are interesting here because they can allow the developer to concentrate on the application logic. Since it might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?
A: You are asking a fundamentally flawed question. The entire point of managed languages is that you don't handle memory. That is handled by the Garbage Collector; while you can take certain actions to better allow it to do its job in an efficient manner, it is not your job to do its job.
The things you can do to improve performance in a world where performance is not controlled by you are simple. Make sure you don't hold onto references you don't need. And use stack based variables if you need more control over the situation.
A: F# seems to be somewhat targeted at this audience. There is actually a book called F# for scientists.
Also this question was asked over at Lambda the Ultimate.
A: You might be surprised at the number of people that use Matlab for this, and since it could be considered a programming language and certainly manages its own memory (with support for huge data sets, etc.), it should seriously be considered as a solution here.
Further, it will generate program code (may require a separate plugin?) so once you arrive at an algorithm you want to package up you can have it generate the C code to perform the work you originally had in your M script or simulink model.
-Adam
A: Not exactly sure what the question is, but you might want to check out Fortress
A: I think I would paraphrase the question by asking: is the .NET memory manager capable of handling the job of memory management for scientific computing, where traditionally hand-tuned routines have been used for improving memory performance, especially for very large (GByte) matrices?
The author of this article certainly believes that it is:
Harness the Features of C# to Power Your Scientific Computing Projects
As others have pointed out, a major point of managed code is that you don't need to deal with memory management tasks yourself. This is a major advantage as it allows you to concentrate on the algorithms.
A: Python has become pretty big in scientific computing lately. It is a managed language, so you don't have to remember to free your memory. At the same time, it has packages for scientific and numerical computing (NumPy, SciPy), which gives you performance similar to compiled languages. Also, Python can be pretty easily integrated with C code.
Python is a very expressive language, making it easier to write and read than many traditional languages. It also resembles MATLAB in some ways, making it easier to use for scientists than, say, C++ or Fortran.
The University of Oslo has recently starting teaching Python as the default language for all science students outside the department of informatics (who still learn Java).
Simula Research Laboratory, which is heavily into scientific computing, partial differential equations etc., uses Python extensively.
A: I would think that functional languages would be best suited to this type of task.
A: BlackBox Component Builder, developed by Oberon microsystems, is the component-based development environment for the programming language „Component Pascal“.
Due to its stability, performance and simplicity, BlackBox is perfectly suited for science and engineering applications.
http://www.oberon.ch/blackbox.html
(Disclosure: I work for Oberon microsystems)
Regards,
tamberg
A: The best option is Python with NumPy/ SciPy/ IPython. It has excellent performance because the core math is happening in libraries written in highly optimized C and Fortran. Since you interact with it using Python, everything from your perspective is clean and managed with extremely succinct, readable code and garbage collection.
A: The short answer is that you can control the memory and performance of programs written in managed languages by choosing a suitable language (like OCaml or F#) and learning how to optimize in that language. The long answer requires a book on the specific language you are using, such as OCaml for Scientists or Visual F# 2010 for Technical Computing.
The subjects you need to learn about are algorithmic optimizations, low-level optimizations, data structures and the internal representation of types in your chosen language. If you are writing parallel algorithms then it is also particularly important to learn about caches.
A: With a managed language you don't get that control as easily. The whole point in these languages is to handle malloc, garbage, and so on. Each managed language will handle that differently.
With Perl, running out of memory is considered a fatal error. You can save the day in some small measure with $^M, but only if your Perl has been compiled with that feature, and you add code provisions for it.
A: Because of its overhead, a .NET application will incur a performance penalty relative to an unmanaged application. However, because this overhead is more-or-less a constant unrelated to the overall size of the application (WARNING: over-simplification), it becomes relatively less of a penalty the larger the application.
So I would go with .NET (so long as it provides you with the libraries you need). Managing memory is a pain, and you have to do it a lot to be good at it. Within .NET, choose whatever language you're most comfortable, so long as it's not J# or VB.NET and is C#.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Grouping Activerecord query by a child attribute Is it possible to use an attribute of a child to group a query?
Post.find(:all, :include => [ :authors, :comments ], :group=>'authors.city')
does not work.
However, I am able to use author.city as part of the conditions.
A: The solution is to force the necessary join so that ActiveRecord can resolve "authors.city":
Post.find(:all, :include => [ :author, :comments ], :joins=>"INNER JOIN authors ON posts.author_id=authors.id", :group=>'authors.city')
A: If that's what you're using, then the syntax is wrong for the :group argument, it should be:
Post.find(:all, :include => [ :author, :comments ], :group=>'authors.city')
Make sure your :author and :comments associations are correct. If 'authors' is the actual table name, then you'll need a 'has_one :author' association in your Post model, and an Author model.
Associations need to be correct, too:
class Post < AR:Base
belongs_to :author
has_many :comments
end
class Author < AR:Base
has_many :posts
end
class Comment < AR:Base
belongs_to :post
end
And the db schema:
posts
id
author_id
authors
id
comments
id
post_id
This will let the query run correctly, however, now I'm getting an error with the results... the :group clause doesn't seem to be applied when :include is used.
A: Have a look at the query that is generated in your log file - you can often paste the query into your favourite MySQL tool to get a more detailed error.
You might actually need to provide an aggregate function to get the database to group correctly (this happens in MySQL rather than a syntax error sometimes).
A: Should the author include be pluralized?
Post.find(:all, :include => [ :authors, :comments ], :group=>'authors.city')
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why can't I change the value of a segment register? (MASM) I decided to teach myself assembly language.
I have realized that my program will not compile if I attempt to change the value of any segment register.
Every article that I have found says that I can indeed change the value of at least 4 segment registers, so what gives?
I am really only interested in the why at this point, I don't have any real purpose in changing these addresses.
A: Are you writing windows executables?
In protected-mode (Win32), segment registers are not used any more.
Reference:
Memory model is also drastically different from the old days of the 16-bit world. Under Win32, we need not be concerned with memory model or segment anymore! There's only one memory model: Flat memory model. There's no more 64K segments. The memory is a large continuous space of 4 GB. That also means you don't have to play with segment registers. You can use any segment register to address any point in the memory space. That's a GREAT help to programmers. This is what makes Win32 assembly programming as easy as C.
A: You said you were interested in why, so:
In real mode, a segment is a 64K "window" to physical memory and these windows are spaced 16 bytes apart. In protected mode, a segment is a window to either physical or virtual memory, whose size and location is determined by the OS, and it has many other properties, including what privilege level a process must have to access it.
From here on, everything I say refers to protected mode.
There is a table in memory called the global descriptor table (GDT), which is where the information about these window sizes and locations and other properties are kept. There may also be local descriptor tables on a per-process basis, and they work in a similar way, so I'll just focus on the GDT.
The value you load into a segment register is known as a segment selector. It is an index into the GDT or LDT, with a bit of extra security information. Naturally, if a program tries to load a selector which points outside the bounds of the GDT, an exception occurs. Also, if the process does not have enough privilege to access the segment, or something else is invalid, an exception occurs.
When an exception occurs, the kernel handles it. This sort of exception would probably be classed as a segmentation fault. So the OS kills your program.
There's one final caveat: in the x86 instruction set, you can't load immediate values into segment registers. You must use an intermediate register or a memory operand or POP into the segment register.
MOV DS, 160 ;INVALID - won't assemble
MOV AX, 160 ;VALID - assembles, but will probably result in an
MOV DS, AX ;exception, and thus the death of your program
I think it should be pointed out that the architecture allows for heaps of segments. But AFAIK, when it comes to the mainstream x86 operating systems, segment registers serve only a few purposes:
*
*Security mechanisms, such as keeping user space processes from harming each other or the OS
*Dealing with multiple/multi-core processors
*Thread-local storage: as an optimization, some operating systems (including Linux and Windows) use segment registers for thread-local storage (TLS). Since threads share the same address space, it is hard for a thread to "know" where its TLS region is without using a system call or wasting a register... but since segment registers are practically useless, there's no harm in "wasting" them for the sake of fast TLS. Note that when setting this up, an OS might skip the segment registers and write directly to descriptor cache registers, which are "hidden" registers used to cache the GDT/LDT lookups triggered by references to the segment registers, in which case if you try to read from the segment registers you won't see it.
Apart from a segment per thread for TLS, really only a handful of segments (times the number of processors) are used, and only by the OS. Application programs can completely ignore the segment registers.
This is due to OS design, not to any technical limitations. There may be embedded operating systems that require user-space programs to work with the segment registers, though I don't know of any.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: WPF DataBinding with simple arithmetic operation? I want to add a constant value onto an incoming bound integer. In fact I have several places where I want to bind to the same source value but add different constants. So the ideal solution would be something like this...
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=5}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=8}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myInt, Constant=24}"/>
(NOTE: This is an example to show the idea, my actual binding scenario is not to the canvas property of a TextBox. But this shows the idea more clearly)
At the moment the only solution I can think of is to expose many different source properties each of which adds on a different constant to the same internal value. So I could do something like this...
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus5}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus8}"/>
<TextBox Canvas.Top="{Binding ElementName=mySource, Path=myIntPlus24}"/>
But this is pretty grim because in the future I might need to keep adding new properties for new constants. Also, if I need to change the value added, I need to go and alter the source object, which is pretty naff.
There must be a more generic way than this? Any WPF experts got any ideas?
A: I believe you can do this with a value converter. Here is a blog entry that addresses passing a parameter to the value converter in the xaml. And this blog gives some details of implementing a value converter.
A: Using a value converter is a good solution to the problem as it allows you to modify the source value as it's being bound to the UI.
I've used the following in a couple of places.
public class AddValueConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
object result = value;
int parameterValue;
if (value != null && targetType == typeof(Int32) &&
int.TryParse((string)parameter,
NumberStyles.Integer, culture, out parameterValue))
{
result = (int)value + (int)parameterValue;
}
return result;
}
public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
throw new NotImplementedException();
}
}
Example
<Setter Property="Grid.ColumnSpan"
Value="{Binding
Path=ColumnDefinitions.Count,
RelativeSource={RelativeSource AncestorType=Grid},
Converter={StaticResource addValueConverter},
ConverterParameter=1}"
/>
A: I use a MathConverter that I created to do all simple arithmetic operations with. The code for the converter is here and it can be used like this:
<TextBox Canvas.Top="{Binding SomeValue,
Converter={StaticResource MathConverter},
ConverterParameter=@VALUE+5}" />
You can even use it with more advanced arithmetic operations such as
Width="{Binding ElementName=RootWindow, Path=ActualWidth,
Converter={StaticResource MathConverter},
ConverterParameter=((@VALUE-200)*.3)}"
A: I've never used WPF, but I have a possible solution.
Can your binding Path map to a Map? If so, it should then be able to take an argument (the key). You'd need to create a class that implements the Map interface, but really just returns the base value that you initialized the "Map" with added to the key.
public Integer get( Integer key ) { return baseInt + key; } // or some such
Without some ability to pass the number from the tag, I don't see how you're going to get it to return different deltas from the original value.
A: I would use multibinding converter such as the following example:
C# code:
namespace Example.Converters
{
public class ArithmeticConverter : IMultiValueConverter
{
public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
{
double result = 0;
for (int i = 0; i < values.Length; i++)
{
if (!double.TryParse(values[i]?.ToString(), out var parsedNumber)) continue;
if (TryGetOperations(parameter, i, out var operation))
{
result = operation(result, parsedNumber);
}
}
return result;
}
public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
{
return new[] { Binding.DoNothing, false };
}
private static bool TryGetOperations(object parameter, int operationIndex, out Func<double, double, double> operation)
{
operation = null;
var operations = parameter?.ToString().Split(',');
if (operations == null || operations.Length == 0) return false;
if (operations.Length <= operationIndex)
{
operationIndex = operations.Length - 1;
}
return Operations.TryGetValue(operations[operationIndex]?.ToString(), out operation);
}
public const string Add = "+";
public const string Subtract = "-";
public const string Multiply = "*";
public const string Divide = "/";
private static IDictionary<string, Func<double, double, double>> Operations = new Dictionary<string, Func<double, double, double>>
{
{ Add, (x, y) => x + y },
{ Subtract, (x, y) => x - y },
{ Multiply, (x, y) => x * y },
{ Divide, (x, y) => x / y }
};
}
}
XAML code:
<UserControl
xmlns:converters="clr-namespace:Example.Converters">
<UserControl.Resources>
<converters:ArithmeticConverter x:Key="ArithmeticConverter" />
</UserControl.Resources>
...
<MultiBinding Converter="{StaticResource ArithmeticConverter}" ConverterParameter="+,-">
<Binding Path="NumberToAdd" />
<Binding Path="NumberToSubtract" />
</MultiBinding>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: Deploy multiple instance of reporting services or connect to multiple versions of DLLs Is there any way in SSRS2008 to deploy multiple instances of the ReportServer running separate code sets?
I'm developing a very specific deployment of reporting services where I have a number of custom extensions plugged in. But, my company typically deploys multiple versions of a release at once on the same server. I'm at a little bit of a loss of a good way to do this with reporting services.
I know that I have a few alternatives:
*
*Run multiple instances of Reporting Services with different code sets
The downside to this is that it's a little bit of a headache to upgrade and I'd rather not have multiple instances of the reporting databases. I'm not sure if they would play well together if they were targeting the same databases.
*
*Invoke/include the version specific DLLs on demand by reading from an HTTPRequest variable. (Assembly.LoadFile)
I have a feeling that this could have performance issues and it also sounds like a potential debugging nightmare. I have also not used Assembly.LoadFile before and I'm unsure of how much code I'd have to write that was unversioned to control the versioning system.
Anyone out there have experience with any of this?
A: You can install multiple RS front ends onto one DB backend.
It works well. We use it to have 2 primary RS boxes (load balanced) with a 3rd BCP/DR hot standby box. They are all in the farm.
How to: Configure a Report Server Scale-Out Deployment (Reporting Services Configuration)
A: FWIW I have been running two separate report server installations against the same databases without issues. The volumes are fairly low.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I show data in the header of a multipage SSRS 2005 report? This question was very helpful; however, I have a list control in my report, and when the report grows over 1 page, data in the header only shows up on the last page of the report.
Apparently, hidden textboxes have to be on every page of the report for the header to function properly. How do I do that? The only control I have in the list is a textbox with a bunch of text that grows way over 1 page.
A: Although SSRS does not allow us to use DataSet fields in page headers, it allows us to refer to report items. So we could place a textbox (that takes its value from a DataSet field) anywhere in our report's body and set its Hidden property to true.
Then, we could easily refer to that textbox in the page header with an expression like: =ReportItems!TextBox1.Value and we are done. Note that the textbox that is being referred should be present on every page, or otherwise the header will print an empty value.
A: StackExchange website to the rescue!!!
All I needed to do was use Report Parameters with queried values from my dataset, and then reference =Parameters!Name.Value in the textbox in the header of the report.
A: Select Report Parameters, Add new parameter and check hidden, allow null and allow blank value.
If you are retrieving the values from database:
Under Available Values:
check "from query" radio button and provide dataset,value field and label fields.
Under Default Values:
check "from query" radio button and provide dataset,value fields.
Now provide the value for text box in the footer/header as =Parameters!Footer.Value (Footer is the parameter name).
A: The hidden text boxes can be placed within a rectangle that has a RepeatWith property set to your list item.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What algorithm for a tic-tac-toe game can I use to determine the "best move" for the AI? In a tic-tac-toe implementation I guess that the challenging part is to determine the best move to be played by the machine.
What are the algorithms that can pursued? I'm looking into implementations from simple to complex. How would I go about tackling this part of the problem?
A: A typical algo for tic-tac-toe should look like this:
Board: A nine-element vector representing the board. We store 2 (indicating Blank), 3 (indicating X), or 5 (indicating O).
Turn: An integer indicating which move of the game is about to be played. The 1st move will be indicated by 1, the last by 9.
The Algorithm
The main algorithm uses three functions.
Make2: returns 5 if the center square of the board is blank i.e. if board[5]=2. Otherwise, this function returns any non-corner square (2, 4, 6 or 8).
Posswin(p): Returns 0 if player p can't win on his next move; otherwise, it returns the number of the square that constitutes a winning move. This function will enable the program both to win and to block the opponent's win. It operates by checking each of the rows, columns, and diagonals. By multiplying the values of each square together for an entire row (or column or diagonal), the possibility of a win can be checked. If the product is 18 (3 x 3 x 2), then X can win. If the product is 50 (5 x 5 x 2), then O can win. If a winning row (column or diagonal) is found, the blank square in it can be determined and the number of that square is returned by this function.
Go(n): Makes a move in square n. This procedure sets board[n] to 3 if Turn is odd, or 5 if Turn is even. It also increments Turn by one.
The algorithm has a built-in strategy for each move. It makes the odd-numbered moves if it plays X, the even-numbered moves if it plays O.
Turn = 1 Go(1) (upper left corner).
Turn = 2 If Board[5] is blank, Go(5), else Go(1).
Turn = 3 If Board[9] is blank, Go(9), else Go(3).
Turn = 4 If Posswin(X) is not 0, then Go(Posswin(X)) i.e. [ block opponent’s win], else Go(Make2).
Turn = 5 If Posswin(X) is not 0 then Go(Posswin(X)) [i.e. win], else if Posswin(O) is not 0, then Go(Posswin(O)) [i.e. block win], else if Board[7] is blank, then Go(7), else Go(3). [to explore other possibilities if there are any]
Turn = 6 If Posswin(O) is not 0 then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else Go(Make2).
Turn = 7 If Posswin(X) is not 0 then Go(Posswin(X)), else if Posswin(O) is not 0, then Go(Posswin(O)), else go anywhere that is blank.
Turn = 8 if Posswin(O) is not 0 then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else go anywhere that is blank.
Turn = 9 Same as Turn=7.
I have used it. Let me know how you guys feel.
A: Since you're only dealing with a 3x3 matrix of possible locations, it'd be pretty easy to just write a search through all possibilities without taxing your computing power. For each open space, compute through all the possible outcomes after marking that space (recursively, I'd say), then use the move with the most possibilities of winning.
Optimizing this would be a waste of effort, really. Though some easy ones might be:
*
*Check first for possible wins for the other team; block the first one you find (if there are 2, the game's over anyway).
*Always take the center if it's open (and the previous rule has no candidates).
*Take corners ahead of sides (again, if the previous rules are empty).
A: The strategy from Wikipedia for playing a perfect game (win or tie every time) seems like straightforward pseudo-code:
Quote from Wikipedia (Tic Tac Toe#Strategy)
A player can play a perfect game of Tic-tac-toe (to win or, at least, draw) if they choose the first available move from the following list, each turn, as used in Newell and Simon's 1972 tic-tac-toe program.[6]
*
*Win: If you have two in a row, play the third to get three in a row.
*Block: If the opponent has two in a row, play the third to block them.
*Fork: Create an opportunity where you can win in two ways.
*Block Opponent's Fork:
Option 1: Create two in a row to force the opponent into defending, as long as it doesn't result in them creating a fork or winning. For example, if "X" has a corner, "O" has the center, and "X" has the opposite corner as well, "O" must not play a corner in order to win. (Playing a corner in this scenario creates a fork for "X" to win.)
Option 2: If there is a configuration where the opponent can fork, block that fork.
*Center: Play the center.
*Opposite Corner: If the opponent is in the corner, play the opposite corner.
*Empty Corner: Play an empty corner.
*Empty Side: Play an empty side.
Recognizing what a "fork" situation looks like could be done in a brute-force manner as suggested.
Note: A "perfect" opponent is a nice exercise but ultimately not worth 'playing' against. You could, however, alter the priorities above to give characteristic weaknesses to opponent personalities.
A: What you need (for tic-tac-toe or a far more difficult game like Chess) is the minimax algorithm, or its slightly more complicated variant, alpha-beta pruning. Ordinary naive minimax will do fine for a game with as small a search space as tic-tac-toe, though.
In a nutshell, what you want to do is not to search for the move that has the best possible outcome for you, but rather for the move where the worst possible outcome is as good as possible. If you assume your opponent is playing optimally, you have to assume they will take the move that is worst for you, and therefore you have to take the move that MINimises their MAXimum gain.
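A minimal sketch of plain minimax for tic-tac-toe (no pruning needed at this search-space size; the board representation and names below are my own, not from any library):
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6]             // diagonals
];

function winner(board) { // board: 9 cells holding 'X', 'O', or null
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) return board[a];
  }
  return null;
}

// Score from X's point of view: +1 X wins, -1 O wins, 0 draw.
function minimax(board, player) {
  const w = winner(board);
  if (w) return w === 'X' ? 1 : -1;
  if (board.every(cell => cell)) return 0; // full board, draw

  const scores = [];
  for (let i = 0; i < 9; i++) {
    if (!board[i]) {
      board[i] = player;
      scores.push(minimax(board, player === 'X' ? 'O' : 'X'));
      board[i] = null; // undo the move
    }
  }
  // X MAXimises the score, O MINimises it.
  return player === 'X' ? Math.max(...scores) : Math.min(...scores);
}

function bestMove(board, player) {
  let best = null;
  let bestScore = player === 'X' ? -Infinity : Infinity;
  for (let i = 0; i < 9; i++) {
    if (!board[i]) {
      board[i] = player;
      const score = minimax(board, player === 'X' ? 'O' : 'X');
      board[i] = null;
      if (player === 'X' ? score > bestScore : score < bestScore) {
        bestScore = score;
        best = i;
      }
    }
  }
  return best; // index 0-8 of the move with the best worst-case outcome
}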
A: You can have the AI play itself in some sample games to learn from. Use a supervised learning algorithm, to help it along.
A: An attempt without using a play field.
*
*to win (your double)
*if not, not to lose (opponent's double)
*if not, do you already have a fork (have a double double)
*if not, if the opponent has a fork:
*
*search the blocking points for a possible double and fork (ultimate win)
*if not, search for forks among the blocking points (which gives the opponent the most losing possibilities)
*if not, take blocking points only (not to lose)
*if not, search for a double and fork (ultimate win)
*if not, search only for forks which give the opponent the most losing possibilities
*if not, search only for a double
*if not: dead end, tie, random.
*if not (it means it's your first move):
*
*if it's the first move of the game:
*
*give the opponent the most losing possibilities (the algorithm results in corners only, which gives 7 losing-point possibilities to the opponent)
*or, to break the boredom, just play randomly.
*if it's the second move of the game:
*
*find only the not-losing points (gives a little more options)
*or find the points in this list which have the best winning chance (it can be boring, because it results in only all corners, adjacent corners, or the center)
Note: When you have double and forks, check if your double gives the opponent a double.if it gives, check if that your new mandatory point is included in your fork list.
A: The brute force method of generating every single possible board and scoring it based on the boards it later produces further down the tree doesn't require much memory, especially once you recognize that 90-degree board rotations are redundant, as are flips about the vertical, horizontal, and diagonal axes.
Once you get to that point, there's something like less than 1k of data in a tree graph to describe the outcome, and thus the best move for the computer.
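One way to exploit that redundancy (a sketch in Python, under the assumption that the board is a flat list of nine cells) is to key your lookup table on a canonical form, so all eight symmetric variants of a board collapse to one entry:
def canonical(board):
    # Smallest of the 8 symmetries: 4 rotations, each optionally mirrored.
    def rotate(b):   # 90 degrees clockwise on a flat 3x3 board
        return [b[6], b[3], b[0], b[7], b[4], b[1], b[8], b[5], b[2]]
    def mirror(b):   # flip about the vertical axis
        return [b[2], b[1], b[0], b[5], b[4], b[3], b[8], b[7], b[6]]
    variants = []
    for base in (board, mirror(board)):
        b = base
        for _ in range(4):
            # Replace None with '.' so the tuples compare cleanly
            variants.append(tuple(cell or '.' for cell in b))
            b = rotate(b)
    return min(variants)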
-Adam
A: Rank each of the squares with numeric scores. If a square is taken, move on to the next choice (sorted in descending order by rank). You're going to need to choose a strategy (there are two main ones for going first and three (I think) for second). Technically, you could just program all of the strategies and then choose one at random. That would make for a less predictable opponent.
A: This answer assumes you understand how to implement the perfect algorithm for P1 and discusses how to achieve a win against ordinary human players, who will make some mistakes more commonly than others.
The game of course should end in a draw if both players play optimally. At a human level, P1 playing in a corner produces wins far more often. For whatever psychological reason, P2 is baited into thinking that playing in the center is not that important, which is unfortunate for them, since it's the only response that does not create a winning game for P1.
If P2 does correctly block in the center, P1 should play the opposite corner, because again, for whatever psychological reason, P2 will prefer the symmetry of playing a corner, which again produces a losing board for them.
For any move P1 may make for the starting move, there is a move P2 may make that will create a win for P1 if both players play optimally thereafter. In that sense P1 may play wherever. The edge moves are weakest in the sense that the largest fraction of possible responses to this move produce a draw, but there are still responses that will create a win for P1.
Empirically (more precisely, anecdotally) the best P1 starting moves seem to be first corner, second center, and last edge.
The next challenge you can add, in person or via a GUI, is not to display the board. A human can definitely remember all the state but the added challenge leads to a preference for symmetric boards, which take less effort to remember, leading to the mistake I outlined in the first branch.
I'm a lot of fun at parties, I know.
A: A Tic-tac-toe adaptation of the minimax algorithm
let gameBoard = [
[null, null, null],
[null, null, null],
[null, null, null]
]
const SYMBOLS = {
X:'X',
O:'O'
}
const RESULT = {
INCOMPLETE: "incomplete",
PLAYER_X_WON: SYMBOLS.X,
PLAYER_O_WON: SYMBOLS.O,
TIE: "tie"
}
We'll need a function that can check for the result. The function will check for a succession of chars. Whatever the state of the board is, the result is one of 4 options: either incomplete, player X won, player O won, or a tie.
function checkSuccession (line){
if (line === SYMBOLS.X.repeat(3)) return SYMBOLS.X
if (line === SYMBOLS.O.repeat(3)) return SYMBOLS.O
return false
}
//Helper: counts how many squares have been played so far
function moveCount (board){
return board.flat().filter(square => square !== null).length
}
function getResult(board){
let result = RESULT.INCOMPLETE
if (moveCount(board)<5){
return result
}
let lines = []
//first we check row, then column, then diagonal
for (var i = 0 ; i<3 ; i++){
lines.push(board[i].join(''))
}
for (var j=0 ; j<3; j++){
const column = [board[0][j],board[1][j],board[2][j]]
lines.push(column.join(''))
}
const diag1 = [board[0][0],board[1][1],board[2][2]]
lines.push(diag1.join(''))
const diag2 = [board[0][2],board[1][1],board[2][0]]
lines.push(diag2.join(''))
for (i=0 ; i<lines.length ; i++){
const succession = checkSuccession(lines[i])
if(succession){
return succession
}
}
//Check for tie
if (moveCount(board)==9){
return RESULT.TIE
}
return result
}
Our getBestMove function will receive the state of the board and the symbol of the player for which we want to determine the best possible move. It will check all possible moves with the getResult function. If a move is a win it gets a score of 1; if it's a loss it gets a score of -1; a tie gets a score of 0. If the result is undetermined, we call getBestMove with the new state of the board and the opposite symbol. Since the next move is the opponent's, their victory is the current player's loss, so that score is negated. In the end every possible move has a score of 1, 0 or -1, so we can sort the moves and return the one with the highest score.
const copyBoard = (board) => board.map(
row => row.map( square => square )
)
function getAvailableMoves (board) {
let availableMoves = []
for (let row = 0 ; row<3 ; row++){
for (let column = 0 ; column<3 ; column++){
if (board[row][column]===null){
availableMoves.push({row, column})
}
}
}
return availableMoves
}
function applyMove(board,move, symbol) {
board[move.row][move.column]= symbol
return board
}
function getBestMove (board, symbol){
let availableMoves = getAvailableMoves(board)
let availableMovesAndScores = []
for (var i=0 ; i<availableMoves.length ; i++){
let move = availableMoves[i]
let newBoard = copyBoard(board)
newBoard = applyMove(newBoard,move, symbol)
const result = getResult(newBoard)
let score
if (result == RESULT.TIE) {score = 0}
else if (result == symbol) {
score = 1
}
else {
let otherSymbol = (symbol==SYMBOLS.X)? SYMBOLS.O : SYMBOLS.X
const nextMove = getBestMove(newBoard, otherSymbol)
score = - (nextMove.score)
}
if(score === 1) // Performance optimization
return {move, score}
availableMovesAndScores.push({move, score})
}
availableMovesAndScores.sort((moveA, moveB )=>{
return moveB.score - moveA.score
})
return availableMovesAndScores[0]
}
Algorithm in action, GitHub, explaining the process in more detail
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
}
|
Q: SQL Job Status I am actually working on an SP in SQL 2005. Using the SP I am creating a job and scheduling it for a particular time. These jobs take at least 5 to 10 minutes to complete as the database is very large. But I am not aware of how to check the status of the job. I want to know if it has completed successfully or whether there was an error in execution. On exception I also return a proper error code, but I am not aware of where I can check for this error code.
A: This is what I could find, maybe it solves your problem:
*
*SP to get the current job activity.
exec msdb.dbo.sp_help_jobactivity @job_id = (your job_id here)
You can execute this SP and place the result in a temp table and get the required result from there.
Otherwise have a look at these tables:
*
*msdb.dbo.sysjobactivity
*msdb.dbo.sysjobhistory
Run the following to see the association between these tables.
exec sp_helptext sp_help_jobactivity
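If you need to poll this from client code rather than T-SQL, a sketch along these lines should work (Python with pyodbc here just for illustration; any data access library can run the same query, and it assumes you have read permission on msdb):
import pyodbc  # assumed available

def last_job_outcome(conn_str, job_name):
    # The step_id = 0 row in sysjobhistory is the job-level outcome.
    # run_status: 0 = failed, 1 = succeeded, 3 = cancelled.
    sql = """
        SELECT TOP 1 h.run_status
        FROM msdb.dbo.sysjobhistory h
        JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
        WHERE j.name = ? AND h.step_id = 0
        ORDER BY h.run_date DESC, h.run_time DESC
    """
    with pyodbc.connect(conn_str) as conn:
        row = conn.cursor().execute(sql, job_name).fetchone()
    return None if row is None else row[0]  # None = no history yet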
A: --Copy into Query Analyzer and format it properly so you can understand it easily
--To execute your task (job) using a query
exec msdb.dbo.sp_start_job @job_name = 'Your Job Name', @server_name = 'Your Server Name'
--After executing, run this to check whether it has finished or not
Declare @JobId as varchar(36)
Select @JobId = job_id from msdb.dbo.sysjobs where name = 'Your Job Name'
Declare @JobStatus as int set @JobStatus = -1
While @JobStatus = -1
Begin
--Provide a time delay appropriate to your job so the loop doesn't spin
WaitFor Delay '00:00:05'
select @JobStatus = isnull(run_status, -1)
from msdb.dbo.sysjobactivity JA, msdb.dbo.sysjobhistory JH
where JA.job_history_id = JH.instance_id and JA.job_id = @JobId
End
select @JobStatus
null = Running
1 = Finished successfully
0 = Finished with error
--Once your job finishes you'll get the result
A: I got better code from here
Use msdb
go
select distinct j.Name as "Job Name", j.description as "Job Description", h.run_date as LastStatusDate,
case h.run_status
when 0 then 'Failed'
when 1 then 'Successful'
when 3 then 'Cancelled'
--when 4 then 'In Progress'
end as JobStatus
from sysJobHistory h, sysJobs j
where j.job_id = h.job_id and h.run_date =
(select max(hi.run_date) from sysJobHistory hi where h.job_id = hi.job_id)
order by 1
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: .NET ODBC Connection Pooling I open a connection like this:
Using conn As New OdbcConnection(connectionString)
    conn.Open()
    'do stuff
End Using
If connection pooling is enabled, the connection is not physically closed but released to the pool and will get reused. If it is disabled, it will be physically closed.
Is there any way of knowing programmatically if connection pooling is enabled or not? and the number of used and unused connections currently open in the pool?
EDIT: I need to get this information from within the program, I can't go and check it manually on every single PC where the program will be deployed.
A: MSDN has in-depth guidelines on this:
Configuring Connection Pooling from
the Data Source Administrator
[snip]
Alternatively, you can start the ODBC
Data Source Administrator at the Run
prompt. On the taskbar, click Start,
click Run, and then type Odbcad32.
The tab for managing connection
pooling is found in the ODBC Data
Source Administrator dialog box in
version ODBC 3.5 and later.
Configuring Connection Pooling from
the Registry
For versions prior to version 3.5 of
the ODBC core components, you need to
modify the registry directly to
control the connection pooling
CPTimeout value.
Pooling is always handled by the data server software. The whole point is that in .NET you shouldn't have to worry about it (for example, this is why you should always use SqlConnection when working with SQL Server - part of the reason is that it enables connection pooling).
Update
On Vista, just type "ODBC" in the Start menu and it will find the app for you.
Update Following Clarification from OP
In terms of determining if connection pooling is enabled on each machine, looking at the MSDN guidelines I would say you would be best off checking the registry values (see this article for pointers on registry access).
TBH though, unless the client machines are really crappy, I would possibly not even bother... AFAIK it is enabled by default, and opening connections on a client machine (in my experience) has never been a big deal. It only really becomes a big deal when lots are being opened.
A: Looks like you can just read this registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\SQL Server\CPTimeout
(or some variant thereof, depending on your OS and user account). If the value is 0, then connection pooling is disabled. If it's any value above 0, it's enabled.
See:
http://msdn.microsoft.com/en-us/library/ms810829.aspx
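As a sketch of that lookup (Python's winreg here purely for illustration; from VB.NET you'd read the same key with Microsoft.Win32.Registry, and the exact key path may vary per driver):
import winreg  # Windows only, standard library

def cptimeout(driver="SQL Server"):
    # 0 means pooling disabled; greater than 0 means enabled.
    # Returns None if the value has never been set.
    path = r"SOFTWARE\ODBC\ODBCINST.INI" + "\\" + driver
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "CPTimeout")
            return int(value)  # usually stored as a string (REG_SZ)
    except OSError:
        return None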
I'm not sure about getting the number of open connections. Just curious: why do you need to know the number?
A: To determine the number of open connections on each db, try this SQL - I got it from a document on the internet:
select db_name(dbid), count(*) as 'connections count'
from master..sysprocesses
where spid > 50 and spid <> @@spid
group by db_name(dbid)
order by count(*) desc
Spids <= 50 are used by SQL Server itself, so the SQL above tells you the connections used by your programs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What are the advantages of using the C++ Boost libraries? So, I've been reading through and it appears that the Boost libraries get used a lot in practice (not at my shop, though). Why is this? and what makes it so wonderful?
A: It adds libraries that allow for a more modern approach to C++ programming.
In my experience many C++ programmers are really early-1990s C++ programmers, pretty much writing C++ classes, without a lot of use of generics. The more modern approach uses generics to compose software in a manner that's more like dynamic languages, yet you still get type checking / performance in the end. It is a little bit ugly to look at, but once you get over the syntax issues it really is quite nice. Boost gives you a lot of the tools you need to compose stuff easily: smart pointers, functions, lambdas, bindings, etc. Then there are Boost libraries which exploit this newer way of writing C++ to provide things like networking, regex, etc.
if you are writing lots of for loops, or hand rolling function objects, or doing memory management, then you definitely should check boost out.
A: BOOST's a collection of libraries filling needs common to many C++ projects. Generally, they do prioritise correctness, reusability, portability, run-time performance, and space-efficiency over readability of BOOST implementation code, or sometimes compile times. They tend not to cover complete high-level functional requirements (e.g. application frameworks), and instead (thankfully) offer building blocks that can be more freely combined without dictating or dominating the application design.
The important reasons to consider using BOOST include:
*
*most libraries are pretty well tested and designed: they generally get a reasonably sound review by some excellent programmers, compared to by people with home-brew solutions in the same problem space, and widely used enough to gather extensive real-world feedback
*it's already written and your solution probably isn't
*it's pretty portable (but that varies per library)
*more people in the C++ community will have a head-start in helping you with your code
*BOOST is often a proving ground for introduction to the C++ Standard, so you'll have less work to do in rewriting your code to be compatible with future Standards sans BOOST
*due to the community demand, compiler vendors are more likely to test and react to issues of correctness with BOOST usage
*familiarity with boost libraries will help you do similar work on other projects, possibly in other companies, where whatever code you might write now might not be available for reuse
The libraries are described in a line or two here: http://www.boost.org/doc/libs/.
A: Because the C++ standard library isn't all that complete.
A: From the home page:
"...one of the most highly regarded and expertly designed C++ library projects in the world."
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
"Item 55: Familiarize yourself with Boost."
— Scott Meyers, Effective C++, 3rd Ed.
"The obvious solution for most programmers is to use a library that provides an elegant and efficient platform independent to needed services. Examples are BOOST..."
— Bjarne Stroustrup, Abstraction, libraries, and efficiency in C++
So, it's a range of widely used and accepted libraries, but why would you need it?
If you need:
*
*regex
*function binding
*lambda functions
*unit tests
*smart pointers
*noncopyable, optional
*serialization
*generic dates
*portable filesystem
*circular buffers
*config utils
*generic image library
*TR1
*threads
*uBLAS
and more when you code in C++, have a look at Boost.
A: Boost is basically a synopsis of what the Standard will become. Besides, with all the peer review and usage that Boost gets, you can be pretty sure you're getting quite a good deal for your dependencies.
However most shops don't use Boost, because it's an external dependency. And in reality reducing external dependencies is very important as well.
A: Anything with Kevlin Henney's involvement should be taken note of.
A: Boost is to C++ sort of like .NET Framework is to C#, but maybe on a smaller scale.
A: Because they add many missing things to the standard library, so much so that some of them are getting included in the standard.
Boost people are not lying:
Why should an organization use Boost?
In a word, Productivity. Use of
high-quality libraries like Boost
speeds initial development, results in
fewer bugs, reduces
reinvention-of-the-wheel, and cuts
long-term maintenance costs. And since
Boost libraries tend to become de
facto or de jure standards, many
programmers are already familiar with
them.
Ten of the Boost libraries are
included in the C++ Standard Library's
TR1, and so are slated for later full
standardization. More Boost libraries
are in the pipeline for TR2. Using
Boost libraries gives an organization
a head-start in adopting new
technologies.
Many organization already use programs
implemented with Boost, like Adobe
Acrobat Reader 7.0.
A: I use the filesystem library quite a bit, and boost::shared_ptr is pretty nifty. I hear it does other things too.
A: A few Boost classes are very useful (shared_ptr), but I think they went a bit nuts with traits and concepts in Boost. Compile times and huge binary sizes are completely insane with Boost, as is the case with any template-heavy code. There has to be a balance. I'm not sure if Boost has found it.
A: Boost is used so extensively because:
*
*It is open-source and peer-reviewed.
*It provides a wide range of platform agnostic functionality that STL missed.
*It is a complement to STL rather than a replacement.
*Many of the Boost developers are on the C++ standards committee. In fact, many parts of Boost are expected to be included in the next C++ standard library.
*It is documented nicely.
*Its license allows inclusion in open-source and closed-source projects.
*Its features are not usually dependent on each other so you can link only the parts you require. [Luc Hermitte's comment]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/125580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "135"
}
|